Search Results (850 results)
2. Integrity verification for scientific papers: The first exploration of the text
- Author: Shi, Xiang, Liu, Yinpeng, Liu, Jiawei, Cheng, Qikai, and Lu, Wei
- Published: 2024
3. Generating survey draft based on closeness of position distributions of key words.
- Author: Sun, Xiaoping and Zhuge, Hai
- Subjects: TEXT summarization, CURVES
- Abstract:
Automatically generating a survey draft is a challenge for text summarization research: it requires selecting important sentences from important references within a large set of candidate papers to compose sections that are in line with the section titles, and different sections discuss different numbers of the most relevant reference papers. This is beyond the capability of previous text summarization approaches, which assume that all candidate papers should be included in one summary. This paper proposes an approach to generating a survey draft according to a pattern consisting of sections whose titles are given by the user who requests the survey. The problem of generating each section is divided into the following sub-problems: (1) rank the input scientific documents (in short, documents) according to the title of a section, (2) determine the number of documents that are most relevant to the title, and (3) rank and select sentences from the selected documents according to the title. A position closeness distance of key words is proposed to rank a set of documents by measuring how closely two key words from the section title are distributed within each document. The rationale is that neighboring key words of a section title should appear closer together in more relevant documents than in others. As different sections select different numbers of documents, a method is proposed to determine the number of documents to be included in the current section based on the slope shape of the sorted rank curve of documents for that section title. Based on the duality property of the closeness, ranks of sentences within a document are obtained directly when the document is ranked against the section title, so both the importance and the coherence of selected sentences are reflected without extra calculation for ranking sentences. Experiments and manual evaluation show that the proposed methods achieve significant improvements compared with other approaches. The approach is also significant in applications, as different surveys can be generated according to different patterns given by different users. [ABSTRACT FROM AUTHOR]
- Published: 2024
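A minimal sketch of the keyword position-closeness idea described in entry 3 above; the pairwise minimum-distance formulation, the token-matching rule, and the toy documents are assumptions for illustration, not the authors' exact definition.

```python
# Hypothetical illustration: rank documents by how closely two
# section-title keywords co-occur; smaller distance = more relevant.

def keyword_positions(tokens, keyword):
    """Return the token positions at which `keyword` occurs."""
    return [i for i, tok in enumerate(tokens) if tok == keyword]

def position_closeness(tokens, kw_a, kw_b):
    """Minimum token distance between any occurrences of the two keywords."""
    pos_a = keyword_positions(tokens, kw_a)
    pos_b = keyword_positions(tokens, kw_b)
    if not pos_a or not pos_b:
        return float("inf")  # a keyword is missing: treat as least relevant
    return min(abs(i - j) for i in pos_a for j in pos_b)

docs = {
    "doc1": "text summarization selects sentences for a survey".split(),
    "doc2": "summarization of long text is hard and survey writing differs".split(),
}
ranked = sorted(docs, key=lambda d: position_closeness(docs[d], "text", "summarization"))
print(ranked)  # doc1 ranks first: its keywords are adjacent
```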
4. Improved network intrusion classification with attention-assisted bidirectional LSTM and optimized sparse contractive autoencoders
- Author: Bi, Jing, Guan, Ziyue, Yuan, Haitao, and Zhang, Jia
- Published: 2024
5. A novel cross-domain adaptation framework for unsupervised criminal jargon detection via pre-trained contextual embedding of darknet corpus
- Author: Ke, Liang, Xiao, Peng, Chen, Xinyu, Yu, Shui, Chen, Xingshu, and Wang, Haizhou
- Published: 2024
6. Matching cost function analysis and disparity optimization for low-quality binocular images.
- Author: Hongjin, Zhang, Hui, Wei, and Huilan, Luo
- Subjects: COST functions, COST analysis, GEOMETRIC analysis, ENERGY function, IMAGE analysis, MATCHING theory
- Abstract:
State-of-the-art dense stereo matching algorithms have achieved excellent performance, demonstrating a capability to attain precise matching in most areas. However, current such methods rarely achieve this when images are captured under poor conditions. To improve the accuracy of the algorithm in such cases, this paper introduces a post-optimization algorithm to rectify matching errors and enhance outcomes. The main research areas of this paper include three aspects. (1) Disparities are classified into reliable and unreliable results based on the analysis of geometric matching relationships, local features in the images, and components within the matching cost function; (2) Subsequent analysis of horizontal image features identifies local characteristic indices calculated through integration along the horizontal axis, which establish specific matching criteria, forming the foundation for a cost volume that encompasses these distinct matches; (3) A redefined matching cost function is applied to previously classified unreliable results to rectify matching errors. This energy function is based on the cost volume above. Experimental results validate the efficacy of the proposed post-optimization algorithm, reducing the average matching errors from 8.66% to 5.85%. [ABSTRACT FROM AUTHOR]
- Published: 2024
7. A blind signature scheme for IoV based on 2D-SCML image encryption and lattice cipher.
- Author: Gao, Mengli, Li, Jinqing, Di, Xiaoqiang, Li, Xusheng, and Zhang, Mingao
- Subjects: IMAGE encryption, PUBLIC key cryptography, CIPHERS, DISCLOSURE, MAP design, DATA transmission systems
- Abstract:
Today's Internet of Vehicles (IoV) faces many security risks in the data transmission process, and image data is especially vulnerable to security threats during transmission due to its special characteristics, such as large amounts of information and high visibility. Therefore, to guarantee the dependability of data transmission in the IoV environment, this paper designs a blind signature scheme for IoV based on two-dimensional sine cosine cross-chaotic mapping (2D-SCML) image encryption and a lattice cipher (BSS-IoV). The innovation of this scheme is that it targets blind signatures of image information, blinds the information before it is sent, and combines a lattice public key encryption algorithm to better ensure safe and reliable transmission and reduce the risk of information disclosure. To further ensure the security of the scheme, an image encryption algorithm based on 2D-SCML and pixel splitting (2PS-IEA) is proposed; it is used to blind the information, thereby reducing the risk of information leakage, and it is also used in the signature process to protect the signed information. The 2D-SCML is derived from the cross-model structure proposed in this paper. Simulation results and experimental analysis show that the NPCR and UACI values, 99.6094% and 33.4635% respectively, are close to the ideal values, and that rough image information can still be recovered when 50% of the image is cropped, which indicates that the signature scheme is secure against differential attacks, cropping attacks, and noise attacks. Moreover, the security analysis shows that the scheme provides tamper resistance, non-repudiation, and traceability. • Designed a blind signature scheme for IoV using image encryption and a lattice cipher. • Proposed a crossover model structure. • Designed a 2D-SCML mapping with superior performance based on this model. • Devised an image encryption algorithm based on 2D-SCML and pixel splitting. • Analyzed the signature scheme and encryption scheme experimentally. [ABSTRACT FROM AUTHOR]
- Published: 2024
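The NPCR and UACI figures quoted in entry 7 (99.6094% and 33.4635%) follow the standard definitions for image-encryption randomness tests; a short sketch of those two metrics is below, with random test images as placeholders for the actual cipher images.

```python
# NPCR: percentage of pixel positions whose values differ between two images.
# UACI: average absolute intensity difference, normalized by 255.
import numpy as np

def npcr_uaci(c1, c2):
    c1 = c1.astype(np.int64)
    c2 = c2.astype(np.int64)
    npcr = np.mean(c1 != c2) * 100.0                      # % of differing pixels
    uaci = np.mean(np.abs(c1 - c2) / 255.0) * 100.0       # mean intensity change in %
    return npcr, uaci

rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
img_b = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print(npcr_uaci(img_a, img_b))  # ideal values are roughly 99.61% and 33.46%
```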
8. Bandit algorithms: A comprehensive review and their dynamic selection from a portfolio for multicriteria top-k recommendation.
- Author: Letard, Alexandre, Gutowski, Nicolas, Camp, Olivier, and Amghar, Tassadit
- Subjects: RECOMMENDER systems, FUZZY sets, ALGORITHMS, REINFORCEMENT learning
- Abstract:
This paper discusses the use of portfolio approaches based on bandit algorithms to optimize multicriteria decision-making in recommender systems (accuracy and diversity). While previous research has primarily focused on single-item recommendations, this study extends the research to consider the recommendation of several items per iteration. Two methods, Multiple-play Gorthaur and Budgeted-Gorthaur, are proposed to solve the algorithm selection problem and their performances on real-world datasets are compared. Both methods provide a generalization of the Gorthaur method, which enables it to operate with any Multi-Armed Bandit (MAB) and Contextual Multi-Armed Bandit (CMAB) algorithm as meta-algorithm in a multi-item recommendation scenario. For Multiple-play Gorthaur, an empirical evaluation shows that the use of Thompson Sampling for algorithm selection (Gorthaur-TS) yields better results than the original EXP3 method (Gorthaur-EXP3) and the exclusive use of the optimal algorithm in the portfolio in contextual recommendation problems. Additionally, the paper includes a theoretical regret analysis based on the TS sketch proof applied for this variant of the method. Concerning Budgeted-Gorthaur, experiments show that it allows more flexibility to achieve a suitable trade-off between criteria and a broader coverage of the Pareto set of solutions, overcoming a natural limit of "a-priori" methods. Finally, this paper provides a detailed review, including pseudocodes and theoretical bounds, for all the fundamental MAB and CMAB algorithms used in this study. • Bandit literature lacks formal algorithm review, hindering clarity and comparability. • There is no silver bullet: no algorithm can be the best performer in every instance. • Recommender systems need to balance accuracy, diversity, multi-item recommendations. • Optimal algorithm balances criteria, matching decision maker's preferred trade-off. • Dynamic selection ensures safe performance when optimal algorithm is unknown. [ABSTRACT FROM AUTHOR]
- Published: 2024
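Entry 8 reports that Thompson Sampling works well for picking which bandit algorithm in the portfolio to deploy. Below is a minimal Bernoulli Thompson Sampling loop of that general kind; the three-member portfolio and the simulated reward rates are stand-ins, not the paper's experimental setup.

```python
# Bernoulli Thompson Sampling: sample from each arm's Beta posterior,
# play the arm with the largest draw, then update its posterior.
import random

true_rates = [0.30, 0.55, 0.45]          # unknown quality of 3 portfolio members
alpha = [1.0] * 3                        # Beta(alpha, beta) posterior parameters
beta = [1.0] * 3

for _ in range(2000):
    samples = [random.betavariate(alpha[k], beta[k]) for k in range(3)]
    k = max(range(3), key=lambda i: samples[i])           # optimistic draw wins
    reward = 1 if random.random() < true_rates[k] else 0  # simulated click/no-click
    alpha[k] += reward
    beta[k] += 1 - reward

print("posterior means:", [round(a / (a + b), 3) for a, b in zip(alpha, beta)])
```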
9. Human activity recognition with smartphone-integrated sensors: A survey.
- Author: Dentamaro, Vincenzo, Gattulli, Vincenzo, Impedovo, Donato, and Manca, Fabio
- Subjects: HUMAN activity recognition, MACHINE learning, FEATURE selection, FEATURE extraction, DETECTORS
- Abstract:
• Introductory study using standard ML techniques, with HAR applications and discussions. • Activities found in the literature with the corresponding references. • Co-occurrences between activities and sensors with the corresponding references. • Summary and comparison of the different datasets found in the literature. • Summary of the experimentation settings with performance scores reported in the literature. Human Activity Recognition (HAR) is an essential area of research related to the ability of smartphones to retrieve information through embedded sensors and recognize the activity that humans are performing. Researchers have recognized people's activities by processing the data received from the sensors with machine learning models. This work is intended to be a hands-on survey with practical tables capable of guiding the reader through the sensors used in modern smartphones and highly cited machine learning models that perform human activity recognition. Several papers in the literature have been studied, paying attention to the preprocessing, feature extraction, feature selection, and classification techniques of HAR systems. In addition, several summary tables illustrating HAR approaches are provided: the most popular human activities in the literature with paper references, the most popular datasets available for download (analyzing their characteristics, such as the number of subjects involved, the activities recorded, and the sensors available online), co-occurrences between activities and sensors, and a summary table showing the performance obtained by researchers. The paper's goal is to convey, through the discussion phase and the tables, the current state of the art on this topic. [ABSTRACT FROM AUTHOR]
- Published: 2024
10. A new community detection method for simplified networks by combining structure and attribute information.
- Author: Cai, Jianghui, Hao, Jing, Yang, Haifeng, Yang, Yuqing, Zhao, Xujun, Xun, Yaling, and Zhang, Dongchao
- Subjects: DENSITY
- Abstract:
Complex networks have a large number of nodes and edges, which hinders the understanding of network structure and the discovery of valid information. This paper proposes a new community detection method for simplified networks. First, a similarity measure is defined in which path and attribute information reflect the potential relationship between nodes that are not directly connected. Based on the defined similarity, an Importance Score (IS) is constructed to show the importance of each node; it reflects the density around each node. Then, the simplification process is applied to the complex network. On the simplified network, a novel community detection method is proposed to detect the community structure. Experiments were conducted on real networks and compared with several widely used methods. The experimental results illustrate that the proposed method is more advantageous and can visually and effectively uncover the community structure. [ABSTRACT FROM AUTHOR]
- Published: 2024
11. Convolutional neural networks for quality and species sorting of roundwood with image and numerical data.
- Author: Achatz, Julia, Lukovic, Mirko, Hilt, Simon, Lädrach, Thomas, and Schubert, Mark
- Subjects: CONVOLUTIONAL neural networks, CROSS-sectional imaging, RECOMMENDER systems, IMAGE recognition (Computer vision), FEATURE selection, HUMAN error, MACHINE learning
- Abstract:
Roundwood sorting is still a manual process in many Swiss sawmills, requiring employees to visually inspect and categorize thousands of logs per day. The heavy workload can be both physically and mentally taxing and can lead to increased rates of human error. State-of-the-art automation systems like X-ray log scanners are expensive and difficult to integrate into existing process lines. This paper proposes a novel recommendation system that leverages recent advances in image classification to automate roundwood classification by quality and species. The system integrates a camera to capture cross-sectional images of logs and record numerical data, such as length, taper, and diameter. The analysis of the resulting dataset highlights the challenges of data imbalance and noise, which makes classification difficult and, in some cases, impossible. However, by using selected datasets with reduced noise, state-of-the-art Convolutional Neural Networks (CNNs) can extract quality and species features. Quality models learn from a manually selected and simplified dataset, featuring samples that experts can clearly classify based on the image's information. Species models are trained on a label-noise-reduced dataset, reflecting real-world complexity. The accuracy on the selected dataset for three quality classes is 80%. The species determination is less challenging and reaches 91% accuracy on a synchronized dataset for the main species spruce and fir. Overall, this paper highlights the potential of Machine Learning in augmenting the roundwood sorting processes and presents a novel system that can improve the efficiency and accuracy of the process. [Display omitted] • Automation of roundwood sorting: Replaces manual sorting with image-based AI. • Integrated camera system in roundwood sorting to collect labeled dataset. • Species prediction: 91% accuracy in spruce–fir distinction. • Quality prediction on complexity reduced dataset: 80% accuracy between three main quality levels. • Efficient, adaptable & scalable system which is easy to integrate into existing process lines. [ABSTRACT FROM AUTHOR]
- Published: 2024
12. A malware detection model based on imbalanced heterogeneous graph embeddings.
- Author: Li, Tun, Luo, Ya, Wan, Xin, Li, Qian, Liu, Qilie, Wang, Rong, Jia, Chaolong, and Xiao, Yunpeng
- Subjects: MALWARE, GENERATIVE adversarial networks, COMPUTER security, COMPUTER software industry, CLASSIFICATION algorithms, INFORMATION networks
- Abstract:
The proliferation of malware in recent years has posed a significant threat to the security of computers and mobile devices. Detecting malware, especially on the Android platform, has become a growing concern for researchers and the software industry. This paper proposes a new method for detecting Android malware based on imbalanced heterogeneous graph embedding. First, most malware datasets contain an imbalance of malicious and benign samples, since some types of malware are scarce and difficult to collect; as a result, the classification algorithm cannot learn the minority samples from sufficient data, which degrades downstream classifier performance. Because generative adversarial networks are well suited to completing data, an algorithm for generating graph-structured data is presented, in which nodes are generated to simulate the distribution of minority nodes within the network topology. Then, considering that heterogeneous information networks retain rich node semantic features and can mine implicit relationships, heterogeneous graphs are used to model different types of entities (i.e., apps, APIs, permissions, intents, etc.) and different meta-paths. Finally, a new method is introduced to alleviate the over-smoothing of node information during propagation in deep networks: in the deep GCN, the leader nodes of each layer are first sampled, and then a residual connection and an identity mapping are added to determine the characteristics of the high-order leaders. A self-attention-based semantic fusion method is also applied to adaptively fuse the embedded representations of software nodes under different meta-paths. The test results demonstrate that the proposed IHODroid model effectively detects malicious software. On the DREBIN dataset, which consists of 123,453 Android applications and 5,560 malicious samples, the IHODroid model achieves an accuracy of 0.9360 and an F1 score of 0.9360, outperforming other state-of-the-art baseline methods. • A new generative adversarial network model is proposed for balancing data. • Heterogeneous graphs are used to model malware detection. • A new method is introduced to alleviate the over-smoothing phenomenon. [ABSTRACT FROM AUTHOR]
- Published: 2024
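Entry 12 counters over-smoothing with a residual connection plus an identity mapping in a deep GCN. The sketch below shows one common way such a layer can look (in the spirit of GCNII-style updates); the update rule, the alpha/beta coefficients, and the placeholder adjacency are assumptions, not the exact IHODroid layer.

```python
# Graph-convolution layer with a residual link to the initial features (h0)
# and an identity mapping on the weight transform, to limit over-smoothing.
import torch
import torch.nn as nn

class ResidualIdentityGCNLayer(nn.Module):
    def __init__(self, dim, alpha=0.1, beta=0.5):
        super().__init__()
        self.w = nn.Linear(dim, dim, bias=False)
        self.alpha, self.beta = alpha, beta

    def forward(self, adj_norm, h, h0):
        agg = adj_norm @ h                                   # neighbourhood aggregation
        support = (1 - self.alpha) * agg + self.alpha * h0   # residual to initial features
        out = (1 - self.beta) * support + self.beta * self.w(support)  # identity mapping
        return torch.relu(out)

n, d = 5, 8
adj = torch.eye(n)                       # placeholder for a normalized adjacency matrix
x = torch.randn(n, d)
layer = ResidualIdentityGCNLayer(d)
print(layer(adj, x, x).shape)            # torch.Size([5, 8])
```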
13. MV-Checker: A software tool for multi-valued model checking intelligent applications with trust and commitment.
- Author: Alwhishi, Ghalya, Bentahar, Jamal, Elwhishi, Ahmed, and Pedrycz, Witold
- Subjects: ARTIFICIAL intelligence, PROPOSITION (Logic), BLOCKCHAINS
- Abstract:
Intelligent applications are highly susceptible to uncertainty and inconsistency due to the intense and intricate interactions among their autonomous components (or agents), making their verification theoretically and practically challenging. This paper presents the design and implementation of a new open-source and scalable software tool for modeling and verifying intelligent applications with commitment and trust protocols under both uncertainty and inconsistency settings, using reduction-based multi-valued model checking techniques. The proposed tool is equipped with original and novel algorithms that transform our logics of multi-valued commitment (mv-CTLC) and multi-value trust (mv-TCTL) that we recently introduced to their classical two-valued commitment (CTLC) and trust (TCTL) logic versions as well as to Computational Tree Logic (CTL). Moreover, the tool transforms the mv-CTL to CTL, and it is applicable for the classical model checking by transforming the classical logics of trust and commitment to CTL. To demonstrate the practicality and applicability of the proposed tool in real settings, we present and report experimental results over two blockchain-based applications in the healthcare domain. Finally, we provide discussions and comparisons between the proposed approaches regarding scalability and efficiency. Moreover, we provide packages of more than 11 experiments, including the ones we conduct in this paper and enhanced experiments from previous works. Our findings ensure that the proposed approaches and the software tool that implements them are highly efficient and scalable, giving accurate results under varying conditions. • Design and implementation of an open-source scalable tool for systems model checking. • System modeling with multi-valued logic capturing uncertainty and inconsistency. • Practical demonstration using blockchain-based applications in healthcare domain. • Extensive experiments showing scalability, efficiency and reliability of the tool. [ABSTRACT FROM AUTHOR]
- Published: 2024
14. Conceptual clustering with application on FCA context.
- Author: Kovács, László
- Subjects: K-means clustering
- Abstract:
Conceptual clustering is one of the key approaches for automatic concept generation from input contexts. In this paper, we propose an extension of the dominating k-means method. The proposed method introduces a flexible distance metric that enables the approximation of both Euclidean and meet- (or join-)based similarity calculations. To increase the approximation accuracy, the method combines the k-means method with a Quality Threshold component. The paper shows that the method can also be used to approximate formal concept lattices. Based on the tests performed, the method provides an efficient alternative conceptual clustering approach. • Novel extension of the k-means-based conceptual clustering algorithm. • Novel parameterized distance metric to cover different aggregation approaches. • Tool for flexible concept set reduction in formal concept analysis. [ABSTRACT FROM AUTHOR]
- Published: 2024
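Entry 14 motivates a parameterized metric that can lean toward Euclidean distance or toward a meet/join-based similarity. A purely hypothetical blend of those two ingredients is sketched below; the specific mixing formula and the lambda parameter are illustrative assumptions, not the paper's metric.

```python
# Hypothetical dissimilarity interpolating between a Euclidean term and a
# meet (elementwise-minimum) based term via a mixing weight lam in [0, 1].
import numpy as np

def blended_distance(x, y, lam=0.5):
    euclid = np.linalg.norm(x - y)
    meet_sim = np.minimum(x, y).sum() / max(np.maximum(x, y).sum(), 1e-12)
    return lam * euclid + (1 - lam) * (1.0 - meet_sim)

a, b = np.array([1.0, 0.0, 1.0]), np.array([1.0, 1.0, 0.0])
print(blended_distance(a, b, lam=0.0))   # pure meet-based dissimilarity
print(blended_distance(a, b, lam=1.0))   # pure Euclidean distance
```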
15. Nataf-KernelDensity-Spline-based point estimate method for handling wind power correlation in probabilistic load flow.
- Author: Shaik, Mahmmadsufiyan, Gaonkar, Dattatraya N., Nuvvula, Ramakrishna S.S., Muyeen, S.M., Shezan, Sk. A., and Shafiullah, G.M.
- Subjects: WIND power, PROBABILITY density function, MONTE Carlo method, SOLAR energy, RENEWABLE energy sources, WIND speed
- Abstract:
Modern power systems integrated with renewable energies (REs) contain many uncertainties. The proposed method introduces a novel approach to address the challenges associated with wind power generation uncertainty in probabilistic load flow (PLF) studies. Unlike conventional methods that use wind speed as an input, the paper advocates for utilizing wind generator output power (WGOP) as an input to the point estimate method (PEM) in solving PLF. The uniqueness lies in recognizing the distinct behavior of wind power uncertainty, where not all random samples of wind speed contribute to actual wind power production. The paper suggests a Nataf-KernelDensity-Spline-based PEM, combining the Nataf transformation, Kernel density estimation (KDE), and cubic spline interpolation. This innovative integration effectively manages wind power correlation within the analytical framework. By incorporating spline interpolation and kernel density estimation into the traditional PEM, the proposed method significantly enhances accuracy. To validate the effectiveness of the proposed approach, the method is applied to IEEE-9 and IEEE-57 bus test systems, considering uncertainties related to load, wind power generation (WPG), solar power generation (SPG), and conventional generator (CoG) outages. Comparative analysis with Monte Carlo simulation (MCS) results demonstrates that the proposed method outperforms the conventional PEM in terms of accuracy. Overall, the paper contributes a pioneering solution that not only highlights the importance of using WGOP as an input in PLF but also introduces a sophisticated method that surpasses traditional approaches, improving accuracy in power system studies involving renewable energy integration. The accuracy of the proposed method is validated by comparing its results with those obtained through Monte Carlo simulation (MCS), where the proposed method yields more accurate results than the conventional PEM. [ABSTRACT FROM AUTHOR]
- Published: 2024
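Two of the numeric building blocks named in entry 15, kernel density estimation of wind-generator output power and cubic-spline interpolation, can be sketched with SciPy as below; the synthetic output-power sample and the way the quantiles would feed the point estimate method are assumptions.

```python
# KDE of wind-generator output power (WGOP) and a spline-interpolated
# quantile function, the kind of quantities a PEM draws estimate points from.
import numpy as np
from scipy.stats import gaussian_kde
from scipy.interpolate import CubicSpline

wgop = np.clip(np.random.default_rng(1).normal(1.2, 0.6, 500), 0.0, 2.0)  # MW, synthetic
kde = gaussian_kde(wgop)

grid = np.linspace(0.0, 2.0, 200)
pdf = kde(grid)
cdf = np.cumsum(pdf)
cdf /= cdf[-1]
inv_cdf = CubicSpline(cdf, grid)         # spline-interpolated quantile function

print(inv_cdf([0.1, 0.5, 0.9]))          # candidate points for the estimate scheme
```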
16. Deep semi-supervised learning for medical image segmentation: A review.
- Author: Han, Kai, Sheng, Victor S., Song, Yuqing, Liu, Yi, Qiu, Chengjian, Ma, Siqi, and Liu, Zhe
- Subjects: SUPERVISED learning, DEEP learning, IMAGE segmentation, DIAGNOSTIC imaging, COMPUTER vision, IMAGE analysis
- Abstract:
Deep learning has recently demonstrated considerable promise for a variety of computer vision tasks. However, in many practical applications, large-scale labeled datasets are not available, which limits the deployment of deep learning. To address this problem, semi-supervised learning has attracted a lot of attention in the computer vision community, especially in the field of medical image analysis. This paper analyzes existing deep semi-supervised medical image segmentation studies and categorizes them into five main categories (i.e., pseudo-labeling, consistency regularization, GAN-based methods, contrastive learning-based methods, and hybrid methods). Afterward, we empirically analyze several representative methods by conducting experiments on two common datasets. Besides, we also point out several promising directions for future research. In summary, this paper provides a comprehensive introduction to deep semi-supervised medical image segmentation, aiming to provide a reference and comparison of methods for researchers in this field. [ABSTRACT FROM AUTHOR]
- Published: 2024
17. Fusion of theory and data-driven model in hot plate rolling: A case study of rolling force prediction.
- Author: Dong, Zishuo, Li, Xu, Luan, Feng, Meng, Lingming, Ding, Jingguo, and Zhang, Dianhua
- Subjects: HOT rolling, ARTIFICIAL neural networks, MODEL theory, MANUFACTURING processes, SEARCH algorithms
- Abstract:
As one of the most critical variables in the hot rolling process, the accuracy of rolling force prediction is directly associated with production stability and product quality. Purely data-driven approaches, however, are severely constrained by the quantity and quality of data, posing challenges for further enhancing the accuracy of rolling force prediction. In this paper, a theory fusion deep neural network (DNN) modelling approach was proposed and applied to the prediction of rolling force during hot plate rolling. In terms of model establishment, the novel NN structure was designed in consideration of the rolling mechanism, and senior variable inputs were added at shallow locations in the network to reduce the loss of critical information. In terms of model training, the method of using rolling theory to guide the initialization of the model was proposed to enable the model to learn the theoretical features more completely in the pre-training phase. Finally, a method to optimize the overall structure of the model using the sparrow search algorithm (SSA) was proposed to ensure the best prediction performance. The model was tested with the data in the developed platform, and the results indicated that the proposed method achieves the best accuracy and stability in this paper, and the response relationship between model inputs and output was consistent with existing theoretical knowledge. Thus, the model can be trusted and flexibly applied to the actual manufacturing processes. [ABSTRACT FROM AUTHOR]
- Published: 2024
18. Scheduling optimization of underground mine trackless transportation based on improved estimation of distribution algorithm.
- Author: Li, Ning, Wu, Yahui, Ye, Haiwang, Wang, Liguan, Wang, Qizhou, and Jia, Mingtao
- Subjects: MINES & mineral resources, DISTRIBUTION (Probability theory), TRANSPORTATION costs, PARTICLE swarm optimization
- Abstract:
The trend in underground mine development is trackless transportation, and the scheduling optimization of underground mine trackless transportation is a current research hotspot. This paper proposes a truck scheduling optimization method for underground mine trackless transportation based on an improved estimation of distribution algorithm to address the truck scheduling problem in the underground mine trackless transportation process. The transportation process of transport trucks in underground mines is analyzed. The dispatching model of transport trucks in underground mines is constructed based on the requirements of reducing transportation costs and increasing transportation efficiencies, taking into account the truck meeting situation in the ramp section and minimizing the total shift transportation distance and the total waiting time of transport trucks as the objective functions. The improved estimation of distribution algorithm is used to solve the truck scheduling model, resulting in the optimal ore blending and scheduling schemes. The comparative analysis employs a genetic algorithm, particle swarm optimization algorithm, and immune algorithm. The results demonstrate that, compared to other algorithms, the improved estimation of distribution algorithm proposed in this paper has superior performance in terms of convergence speed and the search for the optimal solution. The total number of transportation tasks associated with the optimal ore allocation scheme is at least 82, and the waiting time associated with the optimal scheduling scheme is reduced to 7.5 min. The operation time chart of transport trucks calculated by the optimal dispatching scheme can clearly depict the location of each transport truck at any time during a shift's working time, which has significant guiding significance for the actual truck transportation in the mine. [ABSTRACT FROM AUTHOR]
- Published: 2024
19. An intrusion detection algorithm based on joint symmetric uncertainty and hyperparameter optimized fusion neural network.
- Author: Wang, Qian, Jiang, Haiyang, Ren, Jiadong, Liu, Han, Wang, Xuehang, and Zhang, Bing
- Subjects: INTRUSION detection systems (Computer security), CONVOLUTIONAL neural networks, PARTICLE swarm optimization, FEATURE selection, ALGORITHMS, HUMAN fingerprints, COMPUTER network security
- Abstract:
Intrusion Detection System (IDS) can ensure the network security by identifying network intrusions according to the abnormal traffic data. However, the intrusion detection data has the problem of high dimensionality and changes with network and attack environments, which leads to the poor performance and poor portability of intrusion detection algorithms. Therefore, this paper proposes an intrusion detection algorithm based on joint symmetric uncertainty and hyperparameter optimized fusion neural network. Firstly, a feature selection method based on symmetric uncertainty and approximate Markov blanket is proposed, which fully considers the correlation and redundancy of features, and also the correlation between combined features and the class label, so as to reduce the data dimensionality. Secondly, the CNN-LSTM classifier fused with Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) is used to extract the spatial features and temporal features to improve the classification performance. Finally, the Particle Swarm Optimization (PSO) algorithm is improved and used to automatically optimize the hyperparameters of the classifier, so that the classifier can be applied to different intrusion detection datasets with better generalization ability and portability. Experiments have verified the effectiveness and superiority of the proposed algorithm on multiple evaluation indicators. • An effective algorithm for intrusion detection is proposed in this paper. • Feature selection is based on symmetric uncertainty and approximate Markov blanket. • A fusion neural network is constructed to extract the spatial and temporal features. • The PSO algorithm is improved to automatically optimize the hyperparameters. [ABSTRACT FROM AUTHOR]
- Published: 2024
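The feature-selection step in entry 19 scores features by symmetric uncertainty, SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)). A small self-contained computation of that quantity is shown below; the toy discrete feature and label arrays are placeholders.

```python
# Symmetric uncertainty between a discretized feature and the class label.
import numpy as np
from collections import Counter
from sklearn.metrics import mutual_info_score

def entropy_bits(values):
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def symmetric_uncertainty(x, y):
    mi = mutual_info_score(x, y) / np.log(2)   # mutual information, nats -> bits
    hx, hy = entropy_bits(x), entropy_bits(y)
    return 2.0 * mi / (hx + hy) if hx + hy > 0 else 0.0

feature = [0, 0, 1, 1, 2, 2, 0, 1]
label   = [0, 0, 1, 1, 1, 1, 0, 1]
print(round(symmetric_uncertainty(feature, label), 3))   # 1.0 means fully informative
```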
20. EEG sensor driven assistive device for elbow and finger rehabilitation using deep learning.
- Author: Mukherjee, Prithwijit and Halder Roy, Anisha
- Subjects: ASSISTIVE technology, ELECTROENCEPHALOGRAPHY, DEEP learning, ELBOW, REHABILITATION, MOTOR imagery (Cognition), DETECTORS, DATA recorders & recording
- Abstract:
[Display omitted] In today's world, a large number of people suffer from motor impairment-related challenges. Rehabilitation is the main method used to overcome these difficulties. The goal of the paper is to develop a deep learning-based electroencephalogram (EEG) sensor-controlled assistive device for the rehabilitation of elbow and finger movements. We have introduced an innovative finger and elbow movement rehabilitation method using an EEG sensor. The EEG sensor's recorded EEG signals, attention values, and meditation values have been used for this purpose. This rehabilitation technique helps a person perform basic finger movement rehabilitation motions, such as finger extension and flexion. Also, basic elbow movement rehabilitation exercises, i.e., elbow extension and elbow flexion, can be performed by using this rehabilitation technique. In this research, an EEG sensor records the prefrontal lobe's EEG signals, attention value, and meditation value of a person while the person performs motor imagery. A deep learning-based CNN-TLSTM (Convolution Neural Network-tanh Long Short-Term Memory) model with attention mechanism has been designed for decoding the EEG sensor recorded data. The trained deep learning model decides the course of action of the rehabilitation device. The designed model achieves an accuracy of 99.6%. A working prototype model of the rehabilitation device has been developed, and the overall success rate of the model is found to be 98.66%. The novelty of the paper lies in i) designing an attention-based CNN-TLSTM model for motor imagery classification and ii) developing a low-cost EEG sensor-driven rehabilitation device for finger and elbow movement rehabilitation. [ABSTRACT FROM AUTHOR]
- Published: 2024
21. A novel evaluation method for renewable energy development based on improved sparrow search algorithm and projection pursuit model.
- Author: Leng, Ya-Jun, Zhang, Huan, and Li, Xiao-Shuang
- Subjects: ENERGY development, RENEWABLE energy sources, SEARCH algorithms, EVALUATION methodology, CARBON emissions
- Abstract:
With global climate change posing a major threat to human society, a growing number of countries have taken "carbon-neutral" as a national strategy and proposed a vision of carbon-free future. As an important supplement to traditional fossil energy, renewable energy is the main force to reduce the use of high-carbon energy and carbon dioxide emissions, which will become the trend of social development in the future. Finding the optimal renewable energy source is of particular significance for achieving the net zero emissions. However, the existing evaluation methods of renewable energy sources have obvious shortcomings. In terms of weight calculation methods, such as the randomness of the subjective method is strong and the index weights do not reflect the small changes of the evaluation matrix, which affect the reliability and accuracy of the evaluation result. The existing ranking methods can only achieve the complete ranking of the different objects, but cannot classify the renewable energy technical alternatives into different grades. Given this background, this paper proposes a novel evaluation method for renewable energy plans based on improved sparrow search algorithm and projection pursuit model. Firstly, this paper improves the traditional sparrow search algorithm from three aspects: population initialization, population update and population variation. Then, the projection pursuit model is constructed, and the improved sparrow search algorithm is applied to optimize the projection target to find the optimal projection direction, so as to determine the weight values of each evaluation index. Finally, the weighted rank-sum ratio method is used to select the best renewable energy technical plan, which can not only realize the complete ranking of different plans, but also classify the technical plans into different levels. Based on the actual renewable energy development data from a province in China, experiments were carried out to investigate the effectiveness of the proposed method. Experimental results show that the proposed method performs better than some existing evaluation methods of renewable energy technical plans. [ABSTRACT FROM AUTHOR]
- Published: 2024
22. Elastic net-based high dimensional data selection for regression.
- Author: Chamlal, Hasna, Benzmane, Asmaa, and Ouaderhman, Tayeb
- Subjects: FEATURE selection, RESEARCH personnel, VITAMIN B2, PREDICTION models
- Abstract:
High-dimensional feature selection is of particular interest to researchers. In some domains, such as microarray data, it is quite common for a group of highly correlated explanatory variables to be of equal importance for inclusion in the predictive model. This paper proposes a new hybrid feature selection approach that integrates feature screening based on Kendall's tau and Elastic Net regularized regression (K-EN). As an approach that embeds the Elastic Net, K-EN has the advantage of the grouping effect, which automatically includes all the highly correlated variables in a group. The K-EN approach offers insightful solutions to high-dimensional regression problems and improves Elastic Net performance, since the regression is preceded by a screening step that further reduces the number of explanatory variables by removing those that disagree with the target based on Kendall's tau. The use of Kendall's tau further enhances Elastic Net performance, as it is robust enough to handle heavy-tailed distributions, non-parametric models, outliers, and non-normal data with greater ease. K-EN is therefore a time-saving approach. The proposed algorithm is evaluated on four simulation scenarios and four publicly available datasets, including riboflavin, eyedata, Longley, and Boston Housing, and achieves Mean Squared Errors (MSE) of 0.2528, 0.0098, 0.1007, and 0.4121, respectively. K-EN's MSEs are the best compared to those achieved by the state-of-the-art approaches reviewed in this paper. In addition, K-EN selects up to 100% of relevant features when run on simulated data. [ABSTRACT FROM AUTHOR]
- Published: 2024
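A minimal sketch of the two-stage idea in entry 22: screen features against the target with Kendall's tau, then fit an Elastic Net on the survivors. The synthetic data, the 0.05 p-value cutoff, and the regularization settings are assumptions for illustration, not the paper's configuration.

```python
# Kendall-tau screening followed by Elastic Net regression on kept features.
import numpy as np
from scipy.stats import kendalltau
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
y = X[:, 0] + 0.8 * X[:, 1] + 0.1 * rng.normal(size=100)   # only 2 informative features

# Screening: keep features whose Kendall tau with y is significant.
keep = [j for j in range(X.shape[1]) if kendalltau(X[:, j], y).pvalue < 0.05]

# Regularized regression on the reduced feature set (grouping effect of Elastic Net).
model = ElasticNet(alpha=0.05, l1_ratio=0.5).fit(X[:, keep], y)
print(len(keep), "features kept; nonzero coefficients:", int(np.sum(model.coef_ != 0)))
```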
23. Formulation and heuristic method for urban cold-chain logistics systems with path flexibility – The case of China.
- Author: Leng, Longlong, Wang, Zheng, Zhao, Yanwei, and Zuo, Qiang
- Subjects: HEURISTIC, PERISHABLE goods, EVOLUTIONARY algorithms, CUSTOMER satisfaction, AUTOMOTIVE fuel consumption, CARBON emissions, TRUCK fuel consumption
- Abstract:
The focus of this paper is on achieving a win-win situation regarding the economic, environmental, and social impacts of the cold chain logistics terminal distribution system. This paper proposes three multi-objective models to investigate the above effects by incorporating soft time windows, heterogeneous fleets, and path flexibility, with defining the objectives of examining logistics costs, fuel consumption, carbon emissions, quality damage to perishable commodities, and customer satisfaction using six evaluation functions. To solve the proposed models, an efficient optimization framework is developed by combining domain operators with versatile multi-objective evolutionary algorithms (MOEA) to obtain Pareto solutions. Extensive experiments are conducted to test the validity of the concerned model and algorithms. The results demonstrate that: (1) the proposed algorithm is effective in solving the proposed model; (2) the proposed multi-path strategy can effectively improve the performance of cold-chain logistics systems compared to single-path strategies; (3) evaluation functions that assess customer satisfaction greatly affect the performance of cold-chain logistics systems; and (4) the trade-off relationship between the objectives should be investigated to define the model. The paper also provides valuable managerial insights for improving the efficiency and sustainability of cold-chain logistics operations. [ABSTRACT FROM AUTHOR]
- Published: 2024
24. An effective metaheuristic technology of people duality psychological tendency and feedback mechanism-based Inherited Optimization Algorithm for solving engineering applications.
- Author: Wang, Kaiguang, Guo, Min, Dai, Cai, Li, Zhiqiang, Wu, Chengwei, and Li, Jiahang
- Subjects: OPTIMIZATION algorithms, METAHEURISTIC algorithms, CONSTRAINED optimization, ENGINEERING, INFORMATION resources, BENCHMARK problems (Computer science)
- Abstract:
Nature- and society-inspired metaheuristic algorithms have recently become the most promising technological model. To solve more complex optimization problems and complicated engineering applications, this paper proposes a new people duality psychological tendency and feedback mechanism-based Inherited Optimization Algorithm(IOA), which is inspired by people showing positive-negative duality cognitive tendency and adaptive feedback behavior when selecting information resources with different identity attributes. The IOA algorithm contains the symmetric two exploration phases. The exploitation phase adaptively regulates the dualistic psychological balance of people in inheriting the information resources with better existence value through a feedback regulation mechanism controlled by the profitability awareness to increase population diversity. This paper qualitatively and quantitatively evaluates the optimization performance of IOA on 84 benchmarks, including swarm convergence behavior, effectiveness, convergence, robustness, and significance. The scalability of the IOA is investigated using the CEC2017 suites. The algorithm performance in solving constrained optimization is verified on 8 engineering problems. All statistical results of the IOA are compared with the most promising 12 metaheuristics, which shows that the absolute computational efficiency of IOA on four types of functions is 95%, 96.67%, 80.95%, and 76.92%, respectively, the average rank (rank sum ratio) of IOA is 1.08 (1.19%) among the 13 algorithms, ranking first. The Wilcoxon signed rank test results on the CEC2017 suites show that IOA contains 1437 significance indicators out of 1440 comparisons, with the proportion of significant differences 99.79%, which suggests the proposed IOA maintains efficient search efficiency. [ABSTRACT FROM AUTHOR]
- Published: 2024
25. A comprehensive review of cyberbullying-related content classification in online social media.
- Author: Teng, Teoh Hwai, Varathan, Kasturi Dewi, and Crestani, Fabio
- Subjects: FOLKSONOMIES, SOCIAL media, SOCIAL networks, CYBERBULLYING, WORKFLOW, MACHINE learning
- Abstract:
The emergence of online social networks (OSN) platforms removes communication barriers that are essential to human life, catalyzing social networking growth. However, this emergence has given rise to a negative impact when someone abuses the platform to commit cyberbullying activities. Hence, it is crucial to work on automated cyberbullying-related classification to mitigate the societal phenomena in OSN. The research on the automated classification model for cyberbullying was pioneered over the last decade with growing interest among researchers. It is helpful to track its growth over the decades to elucidate the state-of-arts techniques applied in this field. This paper presents a large amount of literature germane to cyberbullying classification from past to present to provide a comprehensive review. A total of 126 papers were reviewed. This paper emphasizes text-based cyberbullying and multi-modal cyberbullying. The review was presented around the machine learning workflow, encompassing four core sections: dataset analysis, pre-processing analysis, feature analysis, and technique analysis. Based on the critical analysis, limitations are addressed along with the future works that can be conducted to fill the gap in previous research. Furthermore, the review also examined the ethical implications associated with the implementation of these techniques. This review paper is expected to assist readers in fully comprehending the current trend, architecture, and techniques applied to the field. [ABSTRACT FROM AUTHOR]
- Published: 2024
26. Supervised discretization of continuous-valued attributes for classification using RACER algorithm.
- Author: Toulabinejad, Elaheh, Mirsafaei, Mohammad, and Basiri, Alireza
- Subjects: DECISION trees, NAIVE Bayes classification, CLASSIFICATION algorithms, DISCRETIZATION methods, ALGORITHMS, CLASSIFICATION, LOGISTIC regression analysis
- Abstract:
In the contemporary world, data pervades every facet of human life, and the information contained in this data plays a pivotal role in shaping decision-making and advancing technology. Among the plethora of techniques available, classification methods are highly effective tools for extracting valuable insights from vast volumes of data. The Rule Aggregation ClassifiER (RACER) is a novel rule-based classification algorithm known for its exceptional performance. A notable limitation of RACER lies in its inability to handle continuous features. In this paper, we address the aforementioned limitation by employing various supervised discretization methods, including CAIM, MDLP, Decision Tree (CART), and ChiMerge. The impact of these methods on RACER's accuracy and understandability is evaluated across nine datasets from the UCI repository. Additionally, the paper conducts a comparative analysis of RACER's accuracy against well-known classifiers such as Naive Bayes, Logistic Regression, SVM, LightGBM, and Decision Tree. The findings indicate that RACER achieves the highest average accuracy when we utilize MDLP as the discretization method, surpassing its initial average accuracy. Moreover, RACER demonstrates superior understandability by generating the lowest number of rules when employing ChiMerge and Decision Tree for discretizing numerical features. Furthermore, RACER outperforms the other five classifiers when employing MDLP. • The discretization unit enables the RACER algorithm to handle continuous data. • The discretization unit improves the classification accuracy of the RACER. • The discretization unit increases the understandability of the RACER rules. [ABSTRACT FROM AUTHOR]
- Published: 2024
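One of the supervised discretizers compared in entry 26, the Decision Tree (CART) method, can be sketched with scikit-learn by reusing a shallow tree's split thresholds as bin edges for a continuous feature; the tree depth and the iris example are placeholder choices, not the paper's experimental setup.

```python
# CART-based supervised discretization: tree split points become cut points.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
feature = X[:, 2].reshape(-1, 1)                   # one continuous feature (petal length)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(feature, y)
thresholds = sorted(t for t in tree.tree_.threshold if t != -2)  # -2 marks leaf nodes
bins = np.digitize(feature.ravel(), thresholds)    # discrete code per sample

print("cut points:", [round(t, 2) for t in thresholds])
print("bins used:", sorted(set(bins)))
```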
27. Enhancing local citation recommendation with recurrent highway networks and SciBERT-based embedding.
- Author: Dinh, Thi N., Pham, Phu, Nguyen, Giang L., and Vo, Bay
- Subjects: LANGUAGE models, DEEP learning, NATURAL language processing, COMPUTATIONAL linguistics
- Abstract:
When writing academic papers, referencing statements, claims and previous studies is always an important activity. However, it is considered challenging for scientists to find relevant and appropriate scientific articles which are closely related to their current works in order to reference them in their research. As a consequence of the rapid growth in scientific papers being published every year, researchers might easily get overwhelmed within a huge number of resources. One way to help them find the desired references more easily is to use context-aware citation recommendation. The citation recommender system can automatically provide a list of suitable papers as references based on specified inputs which reflect the researchers' interests. Among the outstanding achievements of deep learning and natural language processing in recent years, the utilization of deep neural learning architectures have supported to address the problem of citation recommendation. As the result, the neural citation recommendation area has received much attention from the academic community, with the aim of enhancing the precision and correctness of the results of existing citation recommendation systems. Following this research direction, in our paper we present a novel context-aware citation recommendation model, called RHN-DualLCR (Recurrent Highway Networks – Dual Local Citation Recommendation), which integrates Recurrent Highway Networks (RHN), an improved model of the original Bidirectional Long Short-Term Memory (BiLSTM) architecture, and uses a SciBERT-based (Science text of Bidirectional Encoder Representations from Transformers) embedding layer to build up the efficiency of the state-of-the-art local citation recommendation model, which enriches context representation with global information. Our research demonstrates its originality and relevance because we used one of the latest achievements of deep models (RHN model) and natural language processing (SciBERT) applied to the citation recommendation problem. We have conducted experiments on the RHN-DualLCR model on 3 widely known datasets for the citation recommendation problem: ACL-200 (Association for Computational Linguistics), ACL-600 and RefSeer, and also used 2 common evaluation standards Mean Reciprocal Rank (MRR) and the Recall@K (R@K for short) to evaluate the performance of our model. Experimental results show that our proposed model is 3% to 16% better than the original models or state-of-the-art models. [ABSTRACT FROM AUTHOR]
- Published: 2024
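A SciBERT-based embedding layer of the kind entry 27 builds on can be obtained with Hugging Face transformers as sketched below; the mean pooling, the cosine ranking, and the sample texts are illustrative choices, not the RHN-DualLCR architecture itself.

```python
# Embed a citation context and candidate papers with SciBERT, then rank
# candidates by cosine similarity to the context.
import torch
from transformers import AutoModel, AutoTokenizer

name = "allenai/scibert_scivocab_uncased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state           # (batch, tokens, 768)
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)             # mean-pooled sentence vectors

ctx = embed(["We build on context-aware citation recommendation approaches."])
cands = embed(["A neural citation recommendation model", "A study of river runoff"])
print(torch.nn.functional.cosine_similarity(ctx, cands))    # higher = better candidate
```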
28. RA-HGNN: Attribute completion of heterogeneous graph neural networks based on residual attention mechanism.
- Author: Zhao, Zongxing, Liu, Zhaowei, Wang, Yingjie, Yang, Dong, and Che, Weishuai
- Subjects: COMPLETE graphs, INFORMATION networks, ATTENTION, GRAPH algorithms
- Abstract:
Heterogeneous graphs, which are also called heterogeneous information networks, analyze the different types of nodes in an information network and the different types of links between them to accurately tell the difference between different semantics. In recent years, there have been several GNN-based models to process heterogeneous graph data and achieve good performance. The model faces the challenge of first considering how to deal with the challenges posed by embedding different types of nodes in a heterogeneous graph; secondly, analyzing the node attribute information, which requires satisfying all nodes with attributes, which is not easy to achieve due to the existence of individual nodes and their neighbors that do not carry attributes. Previous network structures have added attributes to nodes by handcrafted methods, thus neglecting the overall learnability of the model, which in turn leads to poor performance. This paper analyzes the reasons for this phenomenon and aims to design a learning-competent heterogeneous graph neural networks(HGNN) framework. The understanding in this study embeds different types of nodes into the same feature space for node embedding, using the topological embedding of heterogeneous graphs as a guide to complete the process of complementing non-attributed nodes through learnable ways in the model and the use of residual attention mechanisms to handle attributes between nodes. Therefore, this paper proposes a general framework for Attribute Completion of Heterogeneous Graph Neural Network Based on Residual Attention Mechanism (RA-HGNN) , and combines it with other GNN models to enable end-to-end execution of the entire model. Experimental verification was completed on real-world data sets to prove the feasibility of the model, and the experimental results showed state-of-the-art performance. [ABSTRACT FROM AUTHOR]
- Published: 2024
29. Multi-lead-time short-term runoff forecasting based on Ensemble Attention Temporal Convolutional Network.
- Author: Zhang, Chunxiao, Sheng, Ziyu, Zhang, Chunlei, and Wen, Shiping
- Subjects: RUNOFF, LEARNING strategies, LEAD time (Supply chain management), WATERSHEDS
- Abstract:
In the realm of ecological management and human activities within river basins, short-term runoff forecasting plays a pivotal role. Addressing this need, this paper introduces an innovative framework for short-term runoff forecasting: the Ensemble Attention Temporal Convolutional Network (EA-TCN). The cornerstone of this innovation lies in the effective amalgamation of Temporal Convolutional Network (TCN), lightweight attention mechanism, and ensemble learning strategy. This integration synergistically enhances the model's overall performance in terms of accuracy, efficiency, and robustness. TCN forms the foundation of this framework, where its efficient architecture, characterized by shared parameters and parallel computation, significantly boosts computational efficiency. Its employment of causal and dilated convolutions adeptly captures long-term dependencies within time series inputs. The incorporated lightweight attention mechanism further augments the TCN, enabling EA-TCN to precisely discern complex relationship in temporal data, particularly exhibiting remarkable temporal robustness across various forecasting horizons—a feat challenging for conventional forecasting approaches. Additionally, the integration of the Snapshot ensemble method within the framework allows for simulating the effect of training multiple models through one single training process, thus further elevating the model's accuracy and robustness. Rigorous ablation and comparative experiments conducted on the US Columbia River dataset substantiate our claims. The results not only validate the individual merits of each component within EA-TCN but also illuminate the significant advantages of their collective application. Our comprehensive assessment unequivocally demonstrates the framework's exceptional performance in short-term runoff forecasting, positioning it as a state-of-the-art solution in this field. We will further discuss its impact of vocation education in this industry. • This paper applied the modified TCN to short-term runoff forecasting. • EA-TCN adopts a lightweight plug-and-play attention module in the time dimension. • The Snapshot ensemble method is also applied to our proposed model. • EA-TCN can make accurate predictions for multiple different lead times. [ABSTRACT FROM AUTHOR]
- Published: 2024
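The core TCN ingredient entry 29 relies on is the dilated causal 1-D convolution, which pads only the past side of the sequence so no future time step leaks into a prediction. A minimal PyTorch block is sketched below; the channel sizes and dummy runoff tensor are illustrative only.

```python
# Dilated causal Conv1d block: left-pad by (kernel_size - 1) * dilation so
# output length equals input length and causality is preserved.
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    def __init__(self, c_in, c_out, kernel_size=3, dilation=2):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(c_in, c_out, kernel_size, dilation=dilation)

    def forward(self, x):                              # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))        # pad only the past side
        return torch.relu(self.conv(x))

runoff = torch.randn(8, 1, 96)                         # 8 series, 96 time steps
block = CausalConv1d(1, 16)
print(block(runoff).shape)                             # torch.Size([8, 16, 96])
```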
30. An intelligent instantaneous selective method, through compacted ZnO nanoparticle pellets, based on the concept of a virtual electronic nose, for different volatile organic compounds.
- Author: Bouricha, Brahim, Souissi, Riadh, and El Mir, Lassaad
- Subjects: ELECTRONIC noses, NANOPARTICLES, CHEMICAL detectors, PRINCIPAL components analysis, ZINC oxide, VOLATILE organic compounds, ETHANOL, TOLUENE
- Abstract:
This paper demonstrates a novel procedure for virtual e-nose (VEN) systems using a single compacted ZnO nanoparticle chemical sensor. The pellets are formed from nano-powders synthesized via a simple sol–gel method. Furthermore, the paper shows the transient differences in the dynamic response curves of the ZnO pellet when exposed to volatile organic compounds (VOCs), namely ethanol, methanol, isopropanol, acetone, and toluene. The VOCs are categorized using the transient response of a single sensor at four different operating temperatures, offering diverse features that arise from the reaction mechanism of the target molecule. The relevant response attributes were run through Ascending Hierarchical Classification integrated with Principal Component Analysis, and three clusters corresponding to three specific feature subsets were distinguished. A new mathematical iteration of this hybrid process was performed and leads to good HAC output stability. The result is delivered automatically as a three-digit sort in a specified order after thorough implementation. [ABSTRACT FROM AUTHOR]
- Published: 2024
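The pipeline entry 30 applies to sensor transients, Principal Component Analysis followed by ascending (agglomerative) hierarchical clustering, can be sketched with scikit-learn as below; the random stand-in response features replace the actual ZnO transient attributes.

```python
# PCA projection of response features followed by agglomerative clustering.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
# Stand-in: 3 groups of 10 response vectors (6 transient features each).
responses = np.vstack([rng.normal(loc=m, scale=0.2, size=(10, 6)) for m in (0.0, 1.5, 3.0)])

scores = PCA(n_components=2).fit_transform(responses)      # project transient features
labels = AgglomerativeClustering(n_clusters=3).fit_predict(scores)
print(labels)                                               # three VOC clusters expected
```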
31. Analysis and application of rectified complex t-spherical fuzzy Dombi-Choquet integral operators for diabetic retinopathy detection through fundus images.
- Author: Kakati, Pankaj, Quek, Shio Gai, Selvachandran, Ganeshsree, Senapati, Tapan, and Chen, Guiyun
- Subjects: INTEGRAL operators, FUZZY integrals, DIABETIC retinopathy, FUZZY arithmetic, MULTIPLE criteria decision making, TRIANGULAR norms, FUZZY sets
- Abstract:
This paper proposes a rectified complex t-spherical fuzzy set (rCTSFS) model that enables the phase term of a complex number to function faithfully in its inherent role of representing directions, phases, or colour hues. In addition, this paper proposes score and accuracy functions for rectified complex t-spherical fuzzy numbers (rCTSFNs), which allow the three types of membership degrees of an rCTSFN to conform to human judgment and intuition. The proposed rCTSFS model proves to be a productive extension of the complex spherical fuzzy set (CSFS), complex fuzzy set (CFS), and spherical fuzzy set (SFS) models. Dombi t-norms, in turn, are more flexible and comprehensive than some other families of triangular norms, such as the algebraic and Einstein t-norms, owing to the presence of a parameter γ that determines the degree of aggressiveness in estimating the maximum and minimum of a population from a sample. Therefore, this paper proposes two Dombi-Choquet integral operators, namely the rectified complex t-spherical fuzzy arithmetic Dombi-Choquet integral (rCTSFA_{γ,w}^{λ}) operator and the rectified complex t-spherical fuzzy geometric Dombi-Choquet integral (rCTSFG_{γ,w}^{λ}) operator. A multi-criteria decision-making (MCDM) algorithm utilizing the two Dombi-Choquet integral operators is introduced. The proposed Dombi-Choquet MCDM algorithm for the rCTSFS model is applied to an MCDM problem related to diabetic retinopathy detection on five real-life fundus images of different severity levels taken from the Messidor2 dataset. The newly proposed algorithm proves to be the only algorithm that yields diagnostic results matching the ground truth. In contrast, none of the 50 algorithms surveyed among recent works in the literature produces the correct diagnostic results, even after the fuzzification procedure introduced in this work is lent to them. [ABSTRACT FROM AUTHOR]
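For readers unfamiliar with the Dombi family, the standard real-valued Dombi t-norm below shows how the parameter γ controls aggregation aggressiveness; the paper's rectified complex t-spherical extension and the Choquet integration are not reproduced here.

```python
def dombi_tnorm(a: float, b: float, gamma: float = 1.0) -> float:
    """Standard Dombi t-norm on [0, 1]; larger gamma pushes the result toward min(a, b)."""
    if a == 0.0 or b == 0.0:
        return 0.0
    term = ((1 - a) / a) ** gamma + ((1 - b) / b) ** gamma
    return 1.0 / (1.0 + term ** (1.0 / gamma))

print(dombi_tnorm(0.7, 0.8, gamma=1))   # ~0.596
print(dombi_tnorm(0.7, 0.8, gamma=5))   # ~0.697, close to min(0.7, 0.8)
```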
- Published
- 2024
- Full Text
- View/download PDF
32. A dual-level multi-attribute group decision-making model considering interaction factors based on CCSD and MARCOS methods with R-numbers.
- Author
-
Cheng, Rui, Fan, Jianping, Wu, Meiqing, and Seiti, Hamidreza
- Subjects
- *
GROUP decision making , *COGNITIVE maps (Psychology) , *SHARED virtual environments , *RISK assessment , *STANDARD deviations - Abstract
The current body of research on multi-attribute group decision-making (MAGDM) exhibits some limitations, namely the use of a single attribute hierarchy and the assumption of attribute independence. This paper presents a novel approach, referred to as the R-DLMAGDM (R-numbers Dual-Level Multi-Attribute Group Decision-Making) model with interaction factors, which integrates the advantages of R-numbers in risk evaluation and addresses the limitations related to attribute hierarchies and attribute independence. The first step of the new model establishes an entropy model with R-numbers to determine the expert weights. Subsequently, the R-numbers generalized weighted arithmetic average (RNGWAA) operator and the R-numbers generalized weighted geometric average (RNGWGA) operator are introduced to aggregate the information provided by the experts. Next, an enhanced correlation coefficient and standard deviation (CCSD) method is applied to determine the relative weights of the dual-level attributes using the inverse-order concept. Then, a fuzzy cognitive map (FCM) is employed to evaluate the interrelationships among components and derive the final attribute weights. Additionally, the paper discusses the application of the R-DLMAGDM model to risk assessment of a virtual supply chain in the metaverse, prioritizing five alternatives based on four Level 1 criteria and seven Level 2 criteria. The model's flexibility and validity are showcased through comprehensive sensitivity analysis and three-dimensional comparative analysis. [ABSTRACT FROM AUTHOR]
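As an orientation point, the classical crisp entropy-weighting step is sketched below; the paper's entropy model operates on R-numbers and is not reproduced here, and the decision matrix is a made-up example with strictly positive entries.

```python
import numpy as np

def entropy_weights(matrix: np.ndarray) -> np.ndarray:
    """Rows = alternatives, columns = attributes; higher-entropy columns get lower weight."""
    p = matrix / matrix.sum(axis=0, keepdims=True)          # column-wise proportions
    e = -(p * np.log(p)).sum(axis=0) / np.log(matrix.shape[0])
    d = 1.0 - e                                              # degree of divergence per column
    return d / d.sum()

X = np.array([[7., 5., 9.],
              [6., 8., 7.],
              [8., 6., 6.]])                                 # toy evaluation matrix
print(entropy_weights(X).round(3))
```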
- Published
- 2024
- Full Text
- View/download PDF
33. Causal explaining guided domain generalization for rotating machinery intelligent fault diagnosis.
- Author
-
Guo, Chang, Zhao, Zhibin, Ren, Jiaxin, Wang, Shibin, Liu, Yilong, and Chen, Xuefeng
- Subjects
- *
ROTATING machinery , *GENERALIZATION , *FAULT diagnosis , *RELIABILITY in engineering , *DEEP learning , *TRUST , *ARTIFICIAL intelligence - Abstract
Post-hoc explanation approaches for deep learning (DL) models have attracted much attention in safety-critical applications such as rotating machinery intelligent fault diagnosis (IFD). However, even with the help of explanation techniques, the models remain fragile to domain shifts caused by varying speeds and loads, and the explanations do not help improve cross-domain performance. Since humans in the decision-making loop are essential for a reliable diagnostic system, this paper proposes a causal explaining guided domain generalization (CXDG) method to realize trustworthy IFD with the human in the decision loop. Specifically, an explaining model is trained with conditional mutual information, a causal-strength metric, and is utilized to identify the causal features in the input data as the attributions of the diagnostic model. A translation process for the attributions is proposed to make the explanation understandable. Furthermore, the aim of this paper goes beyond explanation: the diagnostic model is guided to focus on the causal features to improve its generalization ability in unseen domains. The effectiveness of the method is validated on two experimental datasets. The results show that the proposed method can both explain the attributions of the diagnostic model and improve its generalization ability. [ABSTRACT FROM AUTHOR]
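The causal-strength metric named above, conditional mutual information, can be estimated for discrete variables by plug-in counting, as in the hedged sketch below; the paper applies the metric to learned features, whereas the variables here are toy binary sequences.

```python
import numpy as np
from collections import Counter

def conditional_mutual_information(x, y, z) -> float:
    """Plug-in estimate of I(X; Y | Z) for discrete samples of equal length."""
    n = len(x)
    pxyz, pxz, pyz, pz = Counter(zip(x, y, z)), Counter(zip(x, z)), Counter(zip(y, z)), Counter(z)
    cmi = 0.0
    for (xi, yi, zi), c in pxyz.items():
        p_xyz = c / n
        cmi += p_xyz * np.log((p_xyz * (pz[zi] / n)) /
                              ((pxz[(xi, zi)] / n) * (pyz[(yi, zi)] / n)))
    return cmi

rng = np.random.default_rng(0)
z = rng.integers(0, 2, 2000)
x = np.where(rng.random(2000) < 0.8, z, 1 - z)   # x mostly follows z
y = np.where(rng.random(2000) < 0.8, x, 1 - x)   # y mostly follows x, even given z
print(conditional_mutual_information(x, y, z))   # positive: x is informative about y given z
```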
- Published
- 2024
- Full Text
- View/download PDF
34. Deep reinforcement learning for cooperative robots based on adaptive sentiment feedback.
- Author
-
Jeon, Haein, Kim, Dae-Won, and Kang, Bo-Yeong
- Subjects
- *
DEEP reinforcement learning , *REINFORCEMENT learning , *PSYCHOLOGICAL feedback , *HUMAN-robot interaction , *GROUP work in education , *ARTIFICIAL intelligence , *ROBOTS - Abstract
Human–robot cooperative tasks have gained importance with the emergence of robotics and artificial intelligence technology. In interactive reinforcement learning techniques, robots learn target tasks by receiving feedback from an experienced human trainer. However, most interactive reinforcement learning studies require a separate process to integrate the trainer's feedback into the training dataset, making it challenging for robots to learn new tasks from humans in real time. Furthermore, the types of feedback sentences that trainers can use are limited in previous research. To address these limitations, this paper proposes a robot teaching strategy that uses deep RL via human–robot interaction to learn table-balancing tasks interactively. The proposed system employs a Deep Q-Network with real-time sentiment feedback delivered through the trainer's speech to learn cooperative tasks. A novel reward function is designed that incorporates sentiment feedback from human speech in real time during the learning process, together with an improved reward-shaping technique based on subdivided feedback levels and shrinking feedback. This function guides the robot to engage in natural interactions with humans and enables it to learn the tasks effectively. Experimental results demonstrate that the proposed interactive deep reinforcement learning model achieved a high success rate of up to 99.06%, outperforming the model without sentiment feedback. • A robot teaching strategy for cooperative tasks via human–robot interaction is proposed. • A Deep Q-Network is employed with real-time feedback delivered by the trainer's speech. • A novel reward function is designed to guide the robot in natural interactions. • The function incorporates the feedback with improved reward-shaping techniques. • Experiments on a robot proved the effectiveness of the proposed approach. [ABSTRACT FROM AUTHOR]
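A hedged illustration of reward shaping with subdivided feedback levels and a shrinking feedback weight follows; the level set, decay schedule, and sentiment scale are assumptions for exposition, not the authors' exact formulation.

```python
def shaped_reward(task_reward: float, sentiment: float, episode: int,
                  decay: float = 0.99) -> float:
    """sentiment in [-1, 1] from the trainer's speech; its influence shrinks over episodes."""
    levels = [-1.0, -0.5, 0.0, 0.5, 1.0]                   # subdivided feedback levels (assumed)
    level = min(levels, key=lambda l: abs(l - sentiment))  # snap sentiment to nearest level
    return task_reward + (decay ** episode) * level        # shrinking feedback term

print(shaped_reward(0.0, 0.8, episode=10))    # early training: feedback dominates
print(shaped_reward(0.0, 0.8, episode=500))   # later: feedback influence has shrunk
```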
- Published
- 2024
- Full Text
- View/download PDF
35. An improved meta heuristic IT2 fuzzy model for nondestructive failure evaluation of metal additive manufacturing lattice structure.
- Author
-
Wen, Yintang, Ren, Yaxue, Zhang, Yuyan, and Zhang, Zhiwei
- Subjects
- *
NONDESTRUCTIVE testing , *STRUCTURAL failures , *METAL fractures , *YIELD stress , *PARTICLE swarm optimization , *METAHEURISTIC algorithms - Abstract
Metal Additive Manufacturing (AM) lattice structures are widely used in various fields because of their lightweight and one-time forming characteristics. Nondestructive reliability evaluation remains a pressing challenge, especially in service settings where structural failure is used as the evaluation criterion. In this paper, the structural failure evaluation problem is addressed based on the defects that affect yield stress, with yield stress as the evaluation index. Firstly, a Computed Tomography (CT) scanner is used for nondestructive testing of metal AM lattice structure samples, and an improved YOLO V7 is used to identify and count defects in the CT scan images. The interval type-2 (IT2) fuzzy model is then enhanced using an irregular Gaussian function, and a meta-heuristic algorithm (particle swarm optimization) is added to strengthen the model's predictive capability. Finally, simulation and experiment are used to validate the failure evaluation of the AM lattice structure. The improved model's superiority in the simulation exercise is demonstrated by comparison with the IT2 and Type-1 fuzzy models. The yield stress prediction deviation of the sample during application verification is 1.12%, which attests to the validity of the failure prediction technique proposed in this paper. • The IT2 fuzzy model is used to predict the failure of lattice structures. • An irregular Gaussian function is designed as the model's principal membership function. • Simulation data is used instead of actual testing to complete the yield stress prediction. • An improved YOLO V7 model is used to detect internal defects in actual samples. [ABSTRACT FROM AUTHOR]
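The interval type-2 primitive the abstract builds on can be sketched as a Gaussian membership function with an uncertain standard deviation, giving lower and upper membership bounds; the paper's irregular Gaussian variant and the PSO tuning are not reproduced, and all parameter values below are assumptions.

```python
import numpy as np

def it2_gaussian(x, mean, sigma_lo, sigma_hi):
    """Interval type-2 Gaussian membership with uncertain std (sigma_lo < sigma_hi)."""
    lower = np.exp(-0.5 * ((x - mean) / sigma_lo) ** 2)   # lower membership function
    upper = np.exp(-0.5 * ((x - mean) / sigma_hi) ** 2)   # upper membership function
    return lower, upper

x = np.array([0.0, 0.5, 1.0])
lo, hi = it2_gaussian(x, mean=0.5, sigma_lo=0.1, sigma_hi=0.2)
print(lo, hi)   # the footprint of uncertainty lies between lo and hi
```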
- Published
- 2024
- Full Text
- View/download PDF
36. Profit maximizing business model for electric vehicle industry incorporating fuzziness in the environment: Encompassing a case study from silicon valley of India.
- Author
-
Bhakuni, Pooja and Das, Amrit
- Subjects
- *
ELECTRIC vehicle industry , *INCORPORATION , *SUSTAINABLE transportation , *FUZZY numbers , *GENETIC algorithms - Abstract
Electric vehicles (EVs) are among the most revolutionary breakthroughs in the decarbonization of road transport. The adoption of EVs has drawn increasing attention and is currently considered a potential route towards sustainable transportation. Businesses approaching the EV market must establish lucrative business models that overcome the adoption obstacles for EVs while assuring long-term growth. The intent of this research is to create a framework for EV enterprises that maximizes profit and reduces delivery time, leading to high customer satisfaction. A volume discount strategy is used for profit escalation. The model presented in this paper is highly adaptable to real-world problems, as it incorporates uncertainty in the EV industry through generalized triangular neutrosophic numbers, an extension of fuzzy numbers. A modified version of the neutrosophic compromise programming approach is proposed, and its results are validated using a genetic algorithm. The proposed optimization-based framework is illustrated through a case study from the city of Bangalore, India. An in-depth exploration of two cases, together with sensitivity analysis, helps prioritize certain discrete customers and gives insights into which manufacturing site should receive more focus. • A profit-maximizing business model for electric vehicle (EV) merchants is proposed. • Uncertainty in the EV industry is captured using fuzzy numbers. • A modified neutrosophic compromise programming technique is proposed. • A case study is considered, involving a particular case and sensitivity analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Iterative integer linear programming-based heuristic for solving the multiple-choice knapsack problem with setups.
- Author
-
Adouani, Yassine, Masmoudi, Malek, Jarray, Fethi, and Jarboui, Bassem
- Subjects
- *
KNAPSACK problems , *LINEAR programming , *INTEGER programming , *HEURISTIC , *INTEGERS - Abstract
This paper studies the multiple-choice knapsack problem with setups (MCKS), a generalization of the knapsack problem with setups (KPS) in which items can be processed in multiple periods. The integer linear programming (ILP) formulation of the MCKS shows its limitations when solved with the CPLEX 12.7 solver, and the existing algorithms VNS&IP and VND-LB from the literature outperform the ILP and provide the best-known solutions, yet solve only 29 of the 120 instances to optimality. This paper presents an iterative integer linear programming-based heuristic (IILP-H) for the MCKS. The IILP-H is evaluated on the MCKS benchmark instances, a sensitivity analysis of the MCKS parameters is provided, and a comparison is made with the upper bound obtained by the CPLEX 12.7 solver and with the best state-of-the-art algorithms. The numerical results show that the IILP-H outperforms all existing solving techniques in both solution quality and computation time: it reaches 120 out of 120 best solutions, including 77 out of 120 proven optima and 69 new best solutions. The complexity of the MCKS and the nature of the solutions obtained by the IILP-H are studied with respect to the number of periods to which a family is assigned. [ABSTRACT FROM AUTHOR]
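To make the ILP setting concrete, the toy model below encodes a knapsack with family setups in PuLP; the data, the family structure, and the multi-period extension that defines the actual MCKS are simplifications, not the paper's formulation.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

profits  = {("f1", 1): 10, ("f1", 2): 7, ("f2", 1): 9, ("f2", 2): 6}   # item profits
weights  = {("f1", 1): 4,  ("f1", 2): 3, ("f2", 1): 5, ("f2", 2): 2}   # item weights
setup_w  = {"f1": 2, "f2": 3}     # capacity consumed when a family is set up
capacity = 10

prob = LpProblem("knapsack_with_setups", LpMaximize)
x = {k: LpVariable(f"x_{k[0]}_{k[1]}", cat=LpBinary) for k in profits}  # pick item
y = {f: LpVariable(f"y_{f}", cat=LpBinary) for f in setup_w}            # open family

prob += lpSum(profits[k] * x[k] for k in profits)
prob += (lpSum(weights[k] * x[k] for k in weights)
         + lpSum(setup_w[f] * y[f] for f in setup_w) <= capacity)
for (f, i) in profits:                       # an item is usable only if its family is set up
    prob += x[(f, i)] <= y[f]

prob.solve()
print({v.name: v.value() for v in prob.variables()})
```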
- Published
- 2024
- Full Text
- View/download PDF
38. Analysis of behavioural curves to classify iris images under the influence of alcohol, drugs, and sleepiness conditions.
- Author
-
Causa, Leonardo, Tapia, Juan E., Valenzuela, Andres, Benalcazar, Daniel, Droguett, Enrique Lopez, and Busch, Christoph
- Subjects
- *
IRIS (Eye) , *BEHAVIORAL assessment , *DROWSINESS , *WAKEFULNESS , *DRUG utilization , *DATABASES - Abstract
This paper proposes a new method to estimate behavioural curves from Near-Infra-Red (NIR) iris images for classifying Fitness for Duty using a biometric capture device. Fitness for Duty (FFD) techniques detect whether a subject is fit to perform a given task safely, i.e., shows no reduced alertness, or is unfit because alertness is reduced by sleepiness or by alcohol and drug consumption. The analysis showed essential differences in pupil and iris behaviour that allow workers to be classified as "Fit" or "Unfit". The best results robustly distinguish subjects under alcohol, drug-consumption, and sleepiness conditions. The Multi-Layer Perceptron and Gradient Boosted Machine reached the best results in all groups, with overall accuracies for the Fit and Unfit classes of 74.0% and 75.5%, respectively. These results open a new application for iris capture devices. • This work is a step forward in iris biometric applications. • This research proposes a new database and method to detect Fitness for Duty. • This paper proposes a new method to estimate behavioural curves from iris images. • Fitness for Duty allows us to detect Fit/Unfit subjects and save lives. • This method classifies alcohol, drug, and sleepiness conditions using an iris image. [ABSTRACT FROM AUTHOR]
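A hedged sketch of the classifier comparison is given below, using scikit-learn stand-ins for the Multi-Layer Perceptron and Gradient Boosted Machine; the behavioural-curve features are synthetic, so the printed scores carry no meaning beyond showing the pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))           # pupil/iris behavioural-curve features (synthetic)
y = rng.integers(0, 2, size=200)        # 0 = Fit, 1 = Unfit (synthetic labels)

for clf in (MLPClassifier(max_iter=500), GradientBoostingClassifier()):
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, round(float(score), 3))
```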
- Published
- 2024
- Full Text
- View/download PDF
39. Introducing NBEATSx to realized volatility forecasting.
- Author
-
Souto, Hugo Gobato and Moradi, Amir
- Subjects
- *
STOCK price indexes , *TIME-varying networks ,DEVELOPING countries - Abstract
This paper investigates the application of neural basis expansion analysis with exogenous variables (NBEATSx) to the prediction of daily stock realized volatility over various time steps. It compares NBEATSx's forecasting accuracy and robustness with several commonly used models, namely the Long Short-Term Memory (LSTM) network, the Temporal Convolutional Network (TCN), and the HAR, GARCH, and GJR-GARCH models. In this research, a total of six distinct stock indexes, three error measures, and four statistical tests are used, while three robustness tests are conducted to verify the outcomes. The findings show that NBEATSx consistently yields statistically more accurate and robust forecasts than the other considered models. On average, NBEATSx generates forecasts that are respectively 13% and 8% more accurate for medium-term and long-term forecasting, and respectively 43%, 60%, and 59% more robust for short-term, medium-term, and long-term forecasting. Yet, it should be noted that the superiority of NBEATSx in terms of forecast accuracy is not evident when applied to stock indexes from developing countries. [ABSTRACT FROM AUTHOR]
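Realized volatility, the target variable above, is commonly computed as the square root of the sum of squared intraday log returns; the short NumPy illustration below uses synthetic 5-minute prices, and conventions (sampling frequency, annualization) vary across studies.

```python
import numpy as np

rng = np.random.default_rng(42)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.001, size=78)))  # one synthetic trading day of 5-min prices
log_returns = np.diff(np.log(prices))
rv = np.sqrt(np.sum(log_returns ** 2))                           # daily realized volatility
print(f"daily realized volatility: {rv:.4f}")
```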
- Published
- 2024
- Full Text
- View/download PDF
40. Predicting and understanding long-haul truck driver turnover using driver-level operational data and supervised machine learning classifiers.
- Author
-
Correll, David HC
- Subjects
- *
TRUCK drivers , *MACHINE learning , *SUPPORT vector machines , *RANDOM forest algorithms , *SUPPLY chain disruptions , *ELECTRONIC equipment - Abstract
Truck drivers are essential to modern supply chains; however, the companies that employ them are having trouble keeping them, and at many trucking firms annual turnover rates approach 100 percent or higher. This paper develops a method for predicting individual truck driver turnover events before they happen by applying supervised machine learning classifiers to a new source of operational truck driver data. The paper uses Electronic Logging Device (ELD) data, which comprise newly federally mandated, time-stamped work logs collected from approximately 1200 American long-haul truck drivers over 3 years. We train three supervised machine learning classifiers (logistic regression, random forests, and support vector machines) on these data and achieve 60 to 70 percent prediction accuracy and 50 to 60 percent recall across two 5-fold cross-validated experiments. We observe that the quantity and consistency of weekday driving assignments explain most of our models' predictive power. We offer these results both as a new technical tool and as novel managerial insight for addressing truck driver retention problems for the benefit of supply chains worldwide. • Newly available operational data from long-haul truck drivers is analyzed. • Three machine learning classifiers are trained on the operational data. • Predictive models achieve 60–70 percent accuracy, 50–60 percent recall. • Models show potential to enable interventions to retain truck drivers. [ABSTRACT FROM AUTHOR]
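The experimental setup, three classifiers evaluated with 5-fold cross-validation on accuracy and recall, can be sketched as follows; the features are synthetic stand-ins for the weekly driving-assignment statistics derived from ELD logs, so the printed numbers are meaningless beyond illustrating the procedure.

```python
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 10))         # ~1200 drivers x weekly work-log features (synthetic)
y = rng.integers(0, 2, size=1200)       # 1 = driver left within the horizon (synthetic)

for clf in (LogisticRegression(max_iter=1000), RandomForestClassifier(), SVC()):
    res = cross_validate(clf, X, y, cv=5, scoring=("accuracy", "recall"))
    print(type(clf).__name__,
          round(float(res["test_accuracy"].mean()), 2),
          round(float(res["test_recall"].mean()), 2))
```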
- Published
- 2024
- Full Text
- View/download PDF
41. Blockchain-based public auditing with deep reinforcement learning for cloud storage.
- Author
-
Li, Jiaxing, Wu, Jigang, Jiang, Lin, and Li, Jin
- Subjects
- *
CLOUD storage , *REINFORCEMENT learning , *DEEP reinforcement learning , *DATA integrity , *AUDITING - Abstract
Public auditing enables auditors to remotely verify the integrity of outsourced data, which is an essential security issue and a promising solution for reliable cloud storage. However, in cloud storage systems, most existing public auditing schemes adopt a static auditing policy in the blockchain network, so they cannot efficiently adapt to a dynamic environment (e.g., dynamic attacks, users joining and leaving). Moreover, it is hard to improve scalability with a static auditing policy, which may result in low performance and high security risks for blockchain-based public auditing. To overcome these limitations, we present a deep reinforcement learning-based method to improve the efficiency (i.e., transaction throughput and network latency) of current blockchain-based public auditing solutions. Specifically, a blockchain-based security protocol is first proposed to guarantee that the integrity of outsourced data can be verified under a dynamic auditing policy. Then, the re-auditing time interval, the number of public auditors, and the block size are adjusted by the proposed deep reinforcement learning-based method to improve performance and security over the long term. Finally, security analysis indicates that the proposed work can resist malicious entities and attacks derived from the Proof-of-Work consensus mechanism. Experimental results demonstrate that the proposed scheme outperforms the baseline schemes in both transaction throughput and network latency. • This paper adapts to the dynamic blockchain network by proposing a DRL-based model. • It minimizes long-term overhead for dynamic public auditing in cloud storage. • It ensures the security of dynamic public auditing through a theoretical analysis. • It obtains better performance on transaction throughput and network latency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Weakly-supervised learning based automatic augmentation of aerial insulator images.
- Author
-
Jiang, Di, Cao, Yuan, and Yang, Qiang
- Subjects
- *
DETECTORS , *ANGLES , *ANNOTATIONS - Abstract
The automatic detection of insulator defects with UAVs and CNN-based detectors has become a popular paradigm in recent years. However, insufficient insulator data has always been the bottleneck of detector performance. Existing augmentation methods either perform whole-image transformations that add no new semantic features or generate samples that require massive manual annotation. Therefore, this paper proposes an automatic augmentation method called Weakly-Supervised Segmentation Mix (WSSM), in which a Foreground Segmentation Network (FSN) is trained under the supervision of bounding-box labels to extract insulators for new sample synthesis. In the FSN training process, UnionMix is designed to generate hard samples based on the pseudo-label, thus facilitating the FSN's segmentation of indistinct insulator boundaries. An Oriented Multi-Instance Loss (OMIL) is proposed to extract supervision from the oriented bounding box so that the FSN can be fully trained to handle the diverse angle distribution of insulators. Experiments conducted on the Aerial Insulator Dataset (AID) indicate that the images synthesized by WSSM achieve stable improvement on multiple mainstream detectors. Both in-domain and cross-domain backgrounds can be used in WSSM to improve the network; for example, the AP of YOLOv5-m is improved from 68.14 to 69.73 with 2600 COCO images, which exceeds the AP of the baseline YOLOv5-l (69.69). To further verify the foreground extraction capability, this paper takes the FSN result as the pseudo-label and trains an instance segmentation network on iSAID. The comparison with existing SOTA methods demonstrates the superior quality of the proposed method. [ABSTRACT FROM AUTHOR]
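The general mechanism WSSM automates, compositing a segmented insulator foreground onto a new background with a binary mask, can be illustrated with NumPy; the sketch below assumes a mask is already available and omits the weakly-supervised FSN that produces it.

```python
import numpy as np

def paste_foreground(background: np.ndarray, foreground: np.ndarray,
                     mask: np.ndarray, top: int, left: int) -> np.ndarray:
    """Composite `foreground` onto `background` wherever the 0/1 `mask` is set."""
    out = background.copy()
    h, w = mask.shape
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = np.where(mask[..., None] == 1, foreground, region)
    return out

bg = np.zeros((256, 256, 3), dtype=np.uint8)      # stand-in background image
fg = np.full((64, 64, 3), 255, dtype=np.uint8)    # stand-in insulator crop
mask = np.ones((64, 64), dtype=np.uint8)          # stand-in segmentation mask
print(paste_foreground(bg, fg, mask, 100, 100).sum())
```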
- Published
- 2024
- Full Text
- View/download PDF
43. Point cloud enhancement optimization and high-fidelity texture reconstruction methods for air material via fusion of 3D scanning and neural rendering.
- Author
-
Hu, Qichun, Wei, Xiaolong, Zhou, Xin, Yin, Yizhen, Xu, Haojun, He, Weifeng, and Zhu, Senlin
- Subjects
- *
POINT cloud , *DATA acquisition systems , *SURFACE texture , *SURFACE reconstruction - Abstract
To realize the digital management and manufacturing of air material, this paper proposes methods for 3D point cloud enhancement optimization and high-fidelity texture reconstruction in the entity digitization of air material, based on the fusion of 3D scanning and neural rendering. An automatic data acquisition system is designed and built, and the acquired training images are enhanced. The YOLOv8 segmentation model and the CascadePSP boundary refinement model are then combined during training to achieve foreground segmentation and background removal of air material images. A 3D point cloud enhancement optimization method is proposed that registers and fuses the point cloud obtained by a binocular structured-light scanner with the point cloud reconstructed by the Colmap algorithm. By combining the coarse registration of a PointNetLK network with the fine registration of the ICP algorithm, a more complete, high-quality dense point cloud is generated, and the effects of illumination conditions and image quantity on point cloud reconstruction are studied experimentally. Using the optimized point cloud model and a random perspective enhancement (RPE) method, the neural rendering model NeuS is improved, which raises the quality of 3D surface texture reconstruction and the view extrapolation effect. To verify the effectiveness of the proposed method, comparison and ablation experiments are carried out. The experimental results show that point cloud enhancement optimization and RPE substantially improve model performance. [ABSTRACT FROM AUTHOR]
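A hedged sketch of the fine-registration stage only is shown below: point-to-point ICP with Open3D, refining an initial transform (here the identity) such as one produced by a coarse, learning-based aligner; the point sets are synthetic and the correspondence distance is an assumption.

```python
import numpy as np
import open3d as o3d

pts = np.random.rand(500, 3)
source = o3d.geometry.PointCloud()
source.points = o3d.utility.Vector3dVector(pts)
target = o3d.geometry.PointCloud()
target.points = o3d.utility.Vector3dVector(pts + np.array([0.01, 0.0, 0.0]))  # slightly shifted copy

# Point-to-point ICP: (source, target, max correspondence distance, initial transform, estimator).
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.05, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(result.transformation)   # 4x4 rigid transform aligning source to target
```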
- Published
- 2024
- Full Text
- View/download PDF
44. An efficient hardware architecture of integer motion estimation based on early termination and data reuse for Versatile video coding.
- Author
-
Zhang, Jun, Zhang, Yu, and Zhang, Hao
- Subjects
- *
VIDEO coding , *INTEGERS , *SEARCH algorithms , *COMPUTATIONAL complexity , *HARDWARE , *PIXELS - Abstract
Integer motion estimation (IME) involves high computational complexity and a large amount of computation data due to the variable block sizes, and it is one of the most critical bottlenecks in video coding. Traditional IME algorithms either lose search accuracy when simplifying the motion search process or become too complex in the pursuit of accuracy, which is not conducive to video encoding and transmission. This paper presents a data reuse algorithm based on search window division, block splitting, and the arrangement of search points in the Test Zone (TZ) search algorithm: pixel blocks of different sizes are divided into several 8×8 pixel blocks, and the overlapping 8×8 blocks between adjacent TZ search points are reused to reduce hardware resource consumption. By comparing the Sum of Absolute Differences (SAD) values of the left and top pixel blocks with that of the current pixel block, the IME can be terminated early, reducing its computational complexity by over 70%. Furthermore, the paper presents a hardware architecture based on the data reuse algorithm, which can process 7680 × 4320@60fps video at an operating clock frequency of 182.99 MHz, with lower resource consumption than comparable hardware designs. [ABSTRACT FROM AUTHOR]
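The matching cost reused across overlapping 8×8 blocks is the Sum of Absolute Differences (SAD); a minimal NumPy version follows, with random blocks standing in for luma samples.

```python
import numpy as np

def sad(block_a: np.ndarray, block_b: np.ndarray) -> int:
    """SAD between two same-sized pixel blocks (e.g. 8x8 luma samples)."""
    return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

rng = np.random.default_rng(0)
cur = rng.integers(0, 256, (8, 8), dtype=np.uint8)   # current 8x8 block
ref = rng.integers(0, 256, (8, 8), dtype=np.uint8)   # candidate reference block at a search point
print(sad(cur, ref))
```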
- Published
- 2024
- Full Text
- View/download PDF
45. An integrated method for complex heterogeneous multi-attribute group decision-making and application to photovoltaic power station site selection.
- Author
-
Wan, Shu-Ping, Wu, Hao, and Dong, Jiu-Ying
- Subjects
- *
SOLAR power plants , *GROUP decision making , *PROSPECT theory , *FUZZY sets , *LINEAR programming , *BUILDING-integrated photovoltaic systems , *REAL numbers , *PHOTOVOLTAIC power systems - Abstract
This paper formulates complex heterogeneous multi-attribute group decision-making (CHMAGDM), characterized by two layers of decision-makers (DMs), individual attribute sets, complex relationships among attributes, and heterogeneous evaluation information. The DMs comprise internal decision-makers (IDMs) and external decision-makers (EDMs), and the evaluation information is represented by linguistic variables (LVs), intervals, and real numbers. To handle such CHMAGDM problems, this paper puts forward an integrated method combining prospect theory, DEMATEL (decision-making trial and evaluation laboratory), and QUALIFLEX (qualitative flexible). Firstly, a novel reference-point determination approach is proposed based on three special indicators, and a linguistic-variable inverse prospect value function is designed and introduced into the transformation from LVs to clouds to reduce the impact of DMs' psychological behaviors on decision-making. Secondly, a cloud-based DEMATEL approach is proposed to determine the attribute weights, in which LVs are used to judge the relationships among attributes; a new linguistic scale function is introduced for the reverse transformation from clouds to LVs, a representative indicator of a cloud is designed to facilitate normalization of the direct relation matrix, and a linear programming model is built to calculate the attribute weights. Thirdly, an extended QUALIFLEX approach is developed to obtain the optimal ranking and the EDM weights, where the adoption coefficient is provided by the IDMs; another linear programming model is constructed to calculate the optimal collective concordance/discordance index of each permutation, and the optimal ranking is generated by the maximum collective concordance/discordance index. Finally, a photovoltaic power station site selection example validates the proposed method, and its advantages are demonstrated through sensitivity and comparison analyses. [ABSTRACT FROM AUTHOR]
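For orientation, the standard prospect-theory value function (Kahneman–Tversky form, with commonly cited parameter values) is sketched below; the paper's linguistic-variable inverse prospect value function and cloud transformations are not reproduced here.

```python
def prospect_value(x: float, alpha: float = 0.88, beta: float = 0.88,
                   lam: float = 2.25) -> float:
    """Value of an outcome x relative to a reference point: gains are concave, losses loom larger."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

print(prospect_value(0.5))    # ~0.54 for a gain of 0.5
print(prospect_value(-0.5))   # ~-1.22: an equal-sized loss weighs roughly twice as much
```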
- Published
- 2024
- Full Text
- View/download PDF
46. An improved levy chaotic particle swarm optimization algorithm for energy-efficient cluster routing scheme in industrial wireless sensor networks.
- Author
-
Luo, Tao, Xie, Jianpeng, Zhang, Baitao, Zhang, Yao, Li, Chaoqun, and Zhou, Jie
- Subjects
- *
WIRELESS sensor networks , *PARTICLE swarm optimization , *NETWORK routing protocols , *INTERNET of things - Abstract
In recent years, with the continued improvement of Industrial Wireless Sensor Networks (IWSNs), applications of the Industrial Internet of Things (IIoT) have become increasingly widespread. However, in IWSNs the sensor nodes have limited energy and lifespan, so building an efficient routing protocol has become an important issue, and cluster routing is an effective way to reduce network energy consumption and extend network lifetime. In this paper, an improved Levy chaotic particle swarm optimization-based cluster routing protocol (LCPSO-CRP) is proposed for IWSNs, which effectively prolongs system lifetime. A chaotic optimization strategy is designed, which greatly improves the convergence speed and expands the search space of LCPSO-CRP. In a series of comparative experiments, LCPSO-CRP shows advantages over existing protocols owing to its distinctive use of both the Levy flight strategy and the chaotic optimization strategy. In addition, this paper proposes a new efficient cluster routing model for IWSNs that considers the intra-cluster distance, the base station distance, the cluster-head energy, and the cluster-member energy. To evaluate the effectiveness of the protocol, extensive new experiments are conducted that closely reflect actual industrial scenarios. The results demonstrate that, compared with traditional cluster routing protocols including LEACH, LEACH-C, SEP, DEEC, and LEACH-kmeans, the energy consumption with LCPSO-CRP decreases by at least 22.91% and the network lifetime of IWSNs increases by at least 13.93%. • This paper proposes a new LCPSO-CRP protocol for efficient cluster routing in IWSNs. • A new energy-efficient cluster routing model and a novel objective function are proposed. • A new chaotic optimization strategy and a novel Levy flight strategy are designed. • A new experiment is devised, validating the efficacy of the proposed LCPSO-CRP. [ABSTRACT FROM AUTHOR]
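Two ingredients named above, a chaotic map for diversified search and a Levy-flight step, are sketched below using the logistic map and Mantegna's algorithm; parameter choices are illustrative assumptions rather than the protocol's actual settings.

```python
import numpy as np
from math import gamma, sin, pi

def logistic_map(x0: float, n: int) -> np.ndarray:
    """Logistic chaotic map x_{k+1} = 4 x_k (1 - x_k), often used to diversify PSO particles."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

def levy_step(beta: float = 1.5, size: int = 1) -> np.ndarray:
    """Mantegna's algorithm for Levy-flight step lengths with exponent beta."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, size)
    v = np.random.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

print(logistic_map(0.37, 5))
print(levy_step(size=3))   # occasionally produces long jumps that escape local optima
```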
- Published
- 2024
- Full Text
- View/download PDF
47. On the trade-off between ranking effectiveness and fairness.
- Author
-
Melucci, Massimo
- Subjects
- *
INFORMATION storage & retrieval systems , *FAIRNESS , *SYMMETRIC matrices , *AUTHOR-editor relationships , *ACCESS to information - Abstract
This paper addresses the problem of maximizing the effectiveness of the ranking produced by information retrieval or recommender systems while at the same time maximizing two fairnesses, that of the group and that of the individual. The context is therefore access to information by users, who aim to satisfy their own information needs, through documents produced by authors and curators, who aim to be exposed fairly, i.e., without discrimination between groups or individuals. The paper describes a general method based on the spectral decomposition of mixtures of symmetric matrices, each of which represents a variable to be maximized, along with experiments conducted on a test collection. The method explains whether and how the trade-offs between effectiveness, group fairness, and individual fairness manifest themselves. The experimental results show that (a) maintaining an acceptable level of effectiveness and fairness at the same time is feasible, and (b) the trade-offs exist, but the order of magnitude of the variations depends on the effectiveness measure used (and therefore on the user's model of access to information) as well as on the fairness measures (and therefore on how authors or editors should be exposed). • Modern information access systems should balance fairness and effectiveness. • A single eigensystem achieves simultaneous maximization. • Fairness, effectiveness, and access measures are crucial in trade-offs. [ABSTRACT FROM AUTHOR]
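A hedged numerical sketch of the core operation follows: form a convex mixture of symmetric matrices, one per objective (effectiveness, group fairness, individual fairness), and take the leading eigenvector of the mixture as the ranking direction; the matrices here are random placeholders, not retrieval data, and the mixture weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_symmetric(n: int) -> np.ndarray:
    a = rng.normal(size=(n, n))
    return (a + a.T) / 2

A_eff, A_grp, A_ind = (random_symmetric(5) for _ in range(3))   # one matrix per objective
weights = np.array([0.5, 0.3, 0.2])                             # trade-off between objectives (assumed)
mixture = weights[0] * A_eff + weights[1] * A_grp + weights[2] * A_ind

vals, vecs = np.linalg.eigh(mixture)   # eigh: eigensolver for symmetric matrices
print(vecs[:, -1])                     # leading eigenvector of the mixture
```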
- Published
- 2024
- Full Text
- View/download PDF
48. Cybersecurity threats in FinTech: A systematic review.
- Author
-
Javaheri, Danial, Fahmideh, Mahdi, Chizari, Hassan, Lalbakhsh, Pooia, and Hur, Junbeom
- Subjects
- *
FINANCIAL technology , *CYBERTERRORISM , *INTERNET security , *ARTIFICIAL intelligence , *DATA privacy , *COMPUTER crime prevention - Abstract
• Adopting PRISMA methodology to investigate cybersecurity threats in FinTech. • Identifying the most effective defense strategies to negate the threats by examining 74 published papers. • Comparing the threats and defenses from different perspectives, including their impacts and technical details. • Proposing a novel and refined taxonomy of security threats and defense strategies in FinTech. • Recommending future research directions to address existing security gaps in current FinTech systems. The rapid evolution of the Smart-everything movement and Artificial Intelligence (AI) advancements have given rise to sophisticated cyber threats that traditional methods cannot counteract. Cyber threats are extremely critical in financial technology (FinTech) as a data-centric sector expected to provide 24/7 services. This paper introduces a novel and refined taxonomy of security threats in FinTech and conducts a comprehensive systematic review of defensive strategies. Through PRISMA methodology applied to 74 selected studies and topic modeling, we identified 11 central cyber threats, with 43 papers detailing them, and pinpointed 9 corresponding defense strategies, as covered in 31 papers. This in-depth analysis offers invaluable insights for stakeholders ranging from banks and enterprises to global governmental bodies, highlighting both the current challenges in FinTech and effective countermeasures, as well as directions for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. A comprehensive survey on applications of transformers for deep learning tasks.
- Author
-
Islam, Saidul, Elmekki, Hanae, Elsebai, Ahmed, Bentahar, Jamal, Drawel, Nagat, Rjoub, Gaith, and Pedrycz, Witold
- Subjects
- *
ARTIFICIAL neural networks , *DEEP learning , *TRANSFORMER models , *NATURAL language processing , *RECURRENT neural networks , *COMPUTER vision - Abstract
Transformers are Deep Neural Networks (DNN) that utilize a self-attention mechanism to capture contextual relationships within sequential data. Unlike traditional neural networks and variants of Recurrent Neural Networks (RNNs), such as Long Short-Term Memory (LSTM), Transformer models excel at managing long dependencies among input sequence elements and facilitate parallel processing. Consequently, Transformer-based models have garnered significant attention from researchers in the field of artificial intelligence. This is due to their tremendous potential and impressive accomplishments, which extend beyond Natural Language Processing (NLP) tasks to encompass various domains, including Computer Vision (CV), audio and speech processing, healthcare, and the Internet of Things (IoT). Although several survey papers have been published, spotlighting the Transformer's contributions in specific fields, architectural disparities, or performance assessments, there remains a notable absence of a comprehensive survey paper that encompasses its major applications across diverse domains. Therefore, this paper addresses this gap by conducting an extensive survey of proposed Transformer models spanning from 2017 to 2022. Our survey encompasses the identification of the top five application domains for Transformer-based models, namely: NLP, CV, multi-modality, audio and speech processing, and signal processing. We analyze the influence of highly impactful Transformer-based models within these domains and subsequently categorize them according to their respective tasks, employing a novel taxonomy. Our primary objective is to illuminate the existing potential and future prospects of Transformers for researchers who are passionate about this area, thereby contributing to a more comprehensive understanding of this groundbreaking technology. • The paper presents a comprehensive survey on transformers for deep learning tasks. • The paper conducts a thorough analysis on highly effective models in five domains. • The paper classifies the models based on respective tasks using a proposed taxonomy. • The characteristics of the surveyed models are deeply explored and analyzed. • Future directions and challenges for transformer-based models are deciphered. [ABSTRACT FROM AUTHOR]
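The self-attention mechanism shared by the surveyed models can be written in a few lines of NumPy; the sketch below is a single head without masking or learned projections, purely to make the computation concrete.

```python
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """x: (sequence_length, d_model); queries, keys, and values are all x for simplicity."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                       # scaled pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over keys
    return weights @ x                                  # weighted sum of values

x = np.random.default_rng(0).normal(size=(4, 8))
print(self_attention(x).shape)                          # (4, 8)
```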
- Published
- 2024
- Full Text
- View/download PDF
50. A framework for predicting breast cancer recurrence.
- Author
-
Hussein, Mahmoud, Elnahas, Mohammed, and Keshk, Arabi
- Subjects
- *
CANCER relapse , *BREAST cancer , *ARTIFICIAL neural networks , *BREAST cancer prognosis , *DISEASE relapse - Abstract
Breast cancer is one of the serious diseases that threaten the lives of many women worldwide. Its seriousness stems from the fact that it is often discovered at a late stage, by which time it has spread widely and become difficult to treat. Another important characteristic of this disease is that it can recur after a period of treatment. Therefore, predicting the occurrence or recurrence of the disease early is the best way to achieve a high cure rate. The main objective of this paper is to improve the prediction performance for breast cancer recurrence. Many methods have been proposed to predict breast cancer recurrence, but they have not achieved the desired results on one of the best-known datasets in this field, the Wisconsin Prognosis Breast Cancer (WPBC) dataset, where the highest accuracy achieved by previous methods is 89.89%. This paper therefore provides a framework for improving the prediction of breast cancer recurrence. The proposed framework overcomes several challenges in the existing dataset: (a) class imbalance, addressed with a data over-sampling technique; and (b) the large number of data dimensions, addressed with Principal Component Analysis (PCA) and a wrapper dimensionality-reduction technique based on a Genetic Algorithm (GA). It also uses a neural network to fuse the results of two individual classifiers, Random Forest (RF) and Support Vector Machine (SVM). Our evaluation showed a significant improvement in prediction performance: the framework achieved an accuracy of 98.3%, an area under the curve of 99%, and precision, recall, and f1-measure of 98%. [ABSTRACT FROM AUTHOR]
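A hedged sketch of the framework's main stages on synthetic data is given below: over-sampling with SMOTE, dimensionality reduction with PCA, and fusion of RF and SVM through a neural-network meta-learner. The GA-based wrapper selection and the WPBC data themselves are omitted, and the library choices (imbalanced-learn, scikit-learn) are assumptions rather than the authors' implementation.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(198, 32))               # WPBC-like shape, synthetic features
y = (rng.random(198) < 0.25).astype(int)     # imbalanced recurrence labels (synthetic)

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)     # (a) balance the classes
X_red = PCA(n_components=10).fit_transform(X_res)           # (b) reduce dimensionality

fused = StackingClassifier(                                  # fuse RF and SVM with an NN meta-learner
    estimators=[("rf", RandomForestClassifier()), ("svm", SVC(probability=True))],
    final_estimator=MLPClassifier(max_iter=500))
print(fused.fit(X_red, y_res).score(X_red, y_res))
```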
- Published
- 2024
- Full Text
- View/download PDF