Search Results (9 results)
2. A Pre-Silicon Detection Based on Deep Learning Model for Hardware Trojans.
- Authors: Ma, Pengcheng; Wang, Zhen; Wang, Yong
- Subjects: Deep learning; Convolutional neural networks; Natural language processing; Artificial neural networks
- Abstract:
Several hardware Trojan (HT) detection techniques are available today to ensure the security of hardware systems. However, the existing pre-silicon HT detection techniques have problems such as difficulties in capturing HT path features and poor applicability. To address these challenges, this paper proposes a gate-level HT detection scheme based on a deep learning model. We parse the circuit gate-level netlist and develop an algorithm to extract circuit path sentences based on the signal propagation rule. Path sentences consisting of gate names are extracted as experimental datasets. We apply the theory of natural language processing (NLP) to the task of HT detection and use three neural networks to filter the length of path sentences. Then, based on the deep learning model text convolutional neural network (TextCNN), we propose PS-TextCNN for HT detection. Our approach is verified on seven benchmark circuits of the RS232-series and eight benchmark circuits of the s-series. We achieve an average true positive rate (TPR) of 88.9%. The TPR of the RS232-series reaches a high score of 99.5%. The TPR of the s-series is 79.5%, which is significantly higher than that of the existing gate-level HT detection techniques. [ABSTRACT FROM AUTHOR]
- Published: 2024
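As an illustration of the path-sentence idea in the abstract above, here is a minimal sketch of extracting gate-name sequences from a gate-level netlist by following signal propagation. The netlist representation (a dict mapping each gate to its fanout gates), the depth cap, and the toy circuit are assumptions for illustration, not the paper's actual parser or algorithm.

```python
# Toy sketch: enumerate "path sentences" (sequences of gate names) by following
# signal propagation from primary inputs toward primary outputs in a netlist.
# The netlist format here is an assumption, not the paper's data structure.

def extract_path_sentences(netlist, inputs, outputs, max_len=12):
    """netlist: {gate: [fanout gates]}; returns lists of gate names."""
    sentences = []

    def walk(gate, path):
        path = path + [gate]
        if gate in outputs or len(path) >= max_len or not netlist.get(gate):
            sentences.append(path)
            return
        for nxt in netlist[gate]:
            walk(nxt, path)

    for g in inputs:
        walk(g, [])
    return sentences

toy_netlist = {
    "IN1": ["AND1"], "IN2": ["AND1", "XOR1"],
    "AND1": ["XOR1"], "XOR1": ["OUT1"], "OUT1": [],
}
for s in extract_path_sentences(toy_netlist, inputs=["IN1", "IN2"], outputs={"OUT1"}):
    print(" ".join(s))
```

Each printed line is one "path sentence" of gate names that a TextCNN-style classifier could consume as text.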
3. Hybrid Deep Learning Model Based on Sparse Recurrent Architecture.
- Authors: Wu, Yutao; Liu, Min
- Subjects: Artificial neural networks; Natural language processing; Deep learning; Image recognition (computer vision); Transformer models; Network performance
- Abstract:
Deep neural networks have made surprising achievements in natural language processing, image pattern classification and recognition, and other domains in the last few years. However, they are still hard to deploy on hardware-constrained or mobile equipment because of their huge number of parameters and high storage and computing costs. In this paper, a new sparse iteration neural network architecture is proposed. First, a pruning method is used to compress the model size and make the network sparse. Then the architecture is iterated on the sparse network model, and network performance is improved without adding additional parameters. Finally, the hybrid deep learning model was evaluated on CV and NLP tasks with ANN, CNN, and Transformer models. Compared with the sparse network architecture, the accuracy on the MNIST, CIFAR10, PASCAL VOC 2012, and SQuAD datasets is improved by 0.47%, 0.64%, 3.75%, and 15.06%, respectively. [ABSTRACT FROM AUTHOR]
- Published: 2024
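The pruning step described in the abstract above can be illustrated with a small magnitude-pruning sketch, assuming PyTorch; the layer, the sparsity level, and the "zero everything below the k-th smallest magnitude" rule are illustrative choices, not the paper's exact compression method.

```python
import torch
import torch.nn as nn

def magnitude_prune_(module, sparsity=0.8):
    """Zero the smallest-magnitude weights in-place so `sparsity` of them are 0."""
    with torch.no_grad():
        for p in module.parameters():
            if p.dim() < 2:          # skip biases
                continue
            k = int(sparsity * p.numel())
            if k == 0:
                continue
            threshold = p.abs().flatten().kthvalue(k).values
            p.mul_((p.abs() > threshold).float())

layer = nn.Linear(256, 256)
magnitude_prune_(layer, sparsity=0.8)
print(f"non-zero weights: {(layer.weight != 0).float().mean():.2%}")
```

After pruning, further training or architectural iteration would operate on the sparse weights; that iteration step is not shown here.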
4. Application of Attention Mechanism with Prior Information in Natural Language Processing.
- Authors: Zhang, Lingling; Zhou, Zhenxiong; Ji, Pengyu; Mei, Aoxue
- Subjects: Deep learning; Natural language processing; Artificial neural networks; Recurrent neural networks; Machine translating
- Abstract:
When deep learning methods are used to model natural language, a recurrent neural network that maps input sequences to output sequences is usually employed. Because natural language contains complicated syntactic structures and the performance of recurrent neural networks degrades on long sentences, researchers have introduced an attention mechanism into the model, which alleviates these problems to a certain extent. The existing attention mechanism still has shortcomings, such as the inability to explicitly exploit the known syntactic structure of a sentence and the poor interpretability of the output probabilities. To address these problems, this article improves the attention mechanism in the recurrent neural network model. First, the prior information in the natural language sequence is constructed as a graph model through syntactic analysis and other means, and a graph-structure regularization term is introduced into the sparse mapping. A new function, netmax, is constructed to replace the softmax function in the traditional attention mechanism, improving the performance of the model and pulling the weights of strongly associated inputs closer together, so that the output of the attention mechanism is easier to interpret. The main innovation of this paper is a weight-calculation method that can be widely used in attention mechanisms, obtained by combining the deep learning model with statistical knowledge, which opens a channel for introducing prior information into deep learning models for natural language processing tasks. [ABSTRACT FROM AUTHOR]
- Published: 2022
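The abstract above does not spell out the netmax function, so the sketch below only illustrates the general idea of biasing attention weights with a syntactic-graph prior. The adjacency vector, the log-prior blending, and the `lam` strength are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def prior_weighted_attention(scores, adjacency, lam=1.0, eps=1e-6):
    """Blend raw attention scores with a syntactic-graph prior.

    scores:    (n,) raw alignment scores for one query position
    adjacency: (n,) 1.0 where the key is linked to the query in the parse graph
    lam:       strength of the prior (lam=0 recovers plain softmax)
    """
    prior = (adjacency + eps) / (adjacency + eps).sum()
    return softmax(scores + lam * np.log(prior))

scores = np.array([0.2, 1.5, 0.3, 0.9])
adjacency = np.array([0.0, 1.0, 1.0, 0.0])   # e.g. dependency-parse neighbours
print(prior_weighted_attention(scores, adjacency, lam=0.5))
```

Positions linked in the parse graph receive a boost, so the resulting weights are easier to relate back to the sentence's known structure.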
5. Research on Mongolian-Chinese machine translation based on the end-to-end neural network.
- Authors: Qing-Dao-Er-Ji, Ren; Su, Yila; Wu, Nier
- Subjects: Machine translating; Recurrent neural networks; Natural language processing; Artificial neural networks; Random fields; Chinese language; Mongolian language
- Abstract:
With the development of natural language processing and neural machine translation, end-to-end (E2E) neural network models have gradually become the focus of machine translation research because of their high translation accuracy and strong semantic coherence. However, problems such as limited vocabulary and low translation loyalty remain. In this paper, a discriminant method and a Conditional Random Field (CRF) model were used to segment and label Mongolian stems and affixes in the preprocessing stage of the Mongolian-Chinese bilingual corpus. To address the low-loyalty problem, a decoding model combining a Convolutional Neural Network (CNN) and a Gated Recurrent Unit (GRU) was constructed, with the GRU performing target-language decoding. A global attention model was used to obtain bilingual word-alignment information during alignment processing. Finally, translation quality was evaluated with Bilingual Evaluation Understudy (BLEU) and Perplexity (PPL) values. The improved model yields a BLEU value of 25.13 and a PPL value of −38.1. The experimental results show that the E2E Mongolian-Chinese neural machine translation model improves translation quality and reduces semantic confusion compared with traditional statistical methods and machine translation models based on Recurrent Neural Networks (RNN). [ABSTRACT FROM AUTHOR]
- Published: 2020
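A minimal sketch of one decoding step with global attention over encoder states feeding a GRU cell, assuming PyTorch. It omits the CNN component of the paper's decoder, and the dimensions, vocabulary size, and dot-product scoring are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalAttentionGRUDecoder(nn.Module):
    """One decoding step: attend over all encoder states, then update a GRU cell."""
    def __init__(self, emb_dim=64, hid_dim=128, vocab=1000):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb_dim)
        self.gru = nn.GRUCell(emb_dim + hid_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab)

    def forward(self, prev_token, hidden, enc_states):
        # dot-product global attention over the encoder states
        scores = torch.einsum("bth,bh->bt", enc_states, hidden)    # (B, T)
        weights = F.softmax(scores, dim=-1)
        context = torch.einsum("bt,bth->bh", weights, enc_states)  # (B, H)
        x = torch.cat([self.embed(prev_token), context], dim=-1)
        hidden = self.gru(x, hidden)
        return self.out(hidden), hidden, weights

dec = GlobalAttentionGRUDecoder()
logits, h, attn = dec(torch.tensor([3]), torch.zeros(1, 128), torch.randn(1, 7, 128))
print(logits.shape, attn.shape)   # torch.Size([1, 1000]) torch.Size([1, 7])
```

The attention weights returned at each step are the bilingual word-alignment information the abstract refers to.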
6. Relational Reasoning Using Neural Networks: A Survey.
- Authors: Pise, Anil Audumbar; Vadapalli, Hima; Sanders, Ian
- Subjects: Automatic speech recognition; Deep learning; Natural language processing; Recurrent neural networks; Emotion recognition; Image analysis; Text recognition; Artificial neural networks
- Abstract:
Relational Networks (RN), one of the most widely used relational reasoning techniques, have achieved great success in many applications such as action and image analysis, speech recognition, and text understanding. Relational reasoning via RNs has been used increasingly in neural networks in recent years. In these instances, the RN is composed of various deep learning-based algorithms packaged as simple plug-and-play modules, which is advantageous because it circumvents the need for feature engineering. This paper surveys the emerging research on deep learning models that use RNs in tasks such as Natural Language Processing (NLP), Action Recognition, Temporal Relational Reasoning, and Facial Emotion Recognition (FER). Because RNs are easy to integrate, they have been combined with Recurrent Neural Networks (RNN) and applied to NLP, action recognition, image analysis, object detection, temporal relational reasoning, and FER, typically using bidirectional LSTMs and CNNs to solve relational reasoning problems at the character and word level. This paper presents a comparative review of relational reasoning-based RN models that use deep learning techniques. [ABSTRACT FROM AUTHOR]
- Published: 2021
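For reference, the core Relation Network module surveyed above can be sketched as a learned pairwise function g summed over all object pairs and passed to a readout function f. The PyTorch implementation below uses illustrative object dimensions and MLP sizes, not those of any particular surveyed model.

```python
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    """Sums a learned pairwise function g over all ordered object pairs, then applies f."""
    def __init__(self, obj_dim=32, hid_dim=64, out_dim=10):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * obj_dim, hid_dim), nn.ReLU(),
                               nn.Linear(hid_dim, hid_dim), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU(),
                               nn.Linear(hid_dim, out_dim))

    def forward(self, objects):                       # objects: (B, N, obj_dim)
        B, N, D = objects.shape
        oi = objects.unsqueeze(2).expand(B, N, N, D)  # object i, broadcast over j
        oj = objects.unsqueeze(1).expand(B, N, N, D)  # object j, broadcast over i
        pairs = torch.cat([oi, oj], dim=-1)           # all ordered pairs (i, j)
        relations = self.g(pairs).sum(dim=(1, 2))     # aggregate over all pairs
        return self.f(relations)

rn = RelationNetwork()
print(rn(torch.randn(4, 6, 32)).shape)   # torch.Size([4, 10])
```

The "objects" can be CNN feature-map cells, LSTM word states, or any other set of embeddings, which is why the module plugs into so many of the tasks listed above.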
7. Speech-Act Classification Using Convolutional Neural Network and Word Embedding.
- Authors: Bae, Kyoungman; Ko, Youngjoong
- Subjects: Speech acts (linguistics); Artificial neural networks; Word recognition; Natural language processing; Deep learning
- Abstract:
The application of deep learning techniques to natural language processing tasks has increased in recent years. Many studies have used deep learning techniques to obtain distributed representations of features. In particular, the convolutional neural network (CNN) with distributed representations has subsequently been shown to be effective for natural language processing tasks. This paper presents how to apply the CNN to speech-act classification. We then analyze the experimental results with respect to two issues: how to handle sparse speech-acts in the training data and out-of-vocabulary words, and how to exploit the advantages of the CNN for speech-act classification. As a result, we obtain significantly improved performance when the CNN is applied to speech-act classification. [ABSTRACT FROM AUTHOR]
- Published: 2018
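A minimal sketch of the kind of CNN-over-word-embeddings sentence classifier described above, assuming PyTorch. The vocabulary size, number of speech-act classes, filter widths, and filter counts are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeechActCNN(nn.Module):
    """Text CNN: word embeddings -> parallel convolutions -> max-pooling -> classes."""
    def __init__(self, vocab=5000, emb_dim=100, n_classes=14, widths=(3, 4, 5), n_filters=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb_dim, padding_idx=0)
        self.convs = nn.ModuleList([nn.Conv1d(emb_dim, n_filters, w) for w in widths])
        self.fc = nn.Linear(n_filters * len(widths), n_classes)

    def forward(self, token_ids):                       # (B, T) word indices
        x = self.embed(token_ids).transpose(1, 2)       # (B, emb_dim, T)
        pooled = [F.relu(c(x)).max(dim=-1).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=-1))       # (B, n_classes)

model = SpeechActCNN()
print(model(torch.randint(1, 5000, (2, 20))).shape)     # torch.Size([2, 14])
```

Pretrained word embeddings would normally be loaded into the embedding layer, with an unknown-word index reserved for out-of-vocabulary tokens.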
8. SENTENCE ALIGNMENT USING FEED FORWARD NEURAL NETWORK.
- Authors: Fattah, Mohamed Abdel; Ren, Fuji; Kuroiwa, Shingo
- Subjects: Natural language processing; Artificial intelligence; Information retrieval; Artificial neural networks; Electronic data processing
- Abstract:
Parallel corpora have become an essential resource for work in multilingual natural language processing. However, sentence-aligned parallel corpora are more efficient than non-aligned parallel corpora for cross-language information retrieval and machine translation applications. In this paper, we present a new approach to aligning sentences in bilingual parallel corpora based on a feed-forward neural network classifier. A feature parameter vector is extracted from the text pair under consideration; it contains text features such as length, punctuation score, and cognate score values. A set of manually prepared training data was used to train the feed-forward neural network, and another set was used for testing. Using this new approach, we achieve an error reduction of 60% over the length-based approach when applied to English–Arabic parallel documents. Moreover, the new approach is valid for any language pair and is quite flexible, since the feature parameter vector may contain more, fewer, or different features than those used in our system, such as a lexical matching feature. [ABSTRACT FROM AUTHOR]
- Published: 2006
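The feature-vector idea in the abstract above can be sketched as below, assuming Python with PyTorch for the classifier. The exact definitions of the length, punctuation, and cognate scores, and the tiny feed-forward network, are illustrative stand-ins for the paper's features and architecture.

```python
import re
import torch
import torch.nn as nn

def pair_features(src, tgt):
    """Length ratio, punctuation overlap, and a crude cognate score for a sentence pair."""
    len_ratio = len(src) / max(len(tgt), 1)
    punct = lambda s: set(re.findall(r"[.,;:!?\"()]", s))
    punct_score = len(punct(src) & punct(tgt)) / max(len(punct(src) | punct(tgt)), 1)
    shared = set(src.lower().split()) & set(tgt.lower().split())   # digits, names, etc.
    cognate_score = len(shared) / max(len(set(src.lower().split())), 1)
    return torch.tensor([len_ratio, punct_score, cognate_score], dtype=torch.float32)

# Tiny feed-forward classifier: aligned vs. not aligned
classifier = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 2))
x = pair_features("The meeting is on 5 May 2006.", "سيعقد الاجتماع في 5 مايو 2006.")
print(classifier(x.unsqueeze(0)))
```

In practice the network would be trained on manually aligned sentence pairs, and extra features (for example a lexical-match score) can be appended to the vector without changing the approach.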
9. Efficient Embedded Decoding of Neural Network Language Models in a Machine Translation System.
- Authors: Zamora-Martinez, Francisco; Castro-Bleda, Maria Jose
- Subjects: Artificial neural networks; Natural language processing; Artificial intelligence; Machine translating; Mathematical models
- Abstract:
Neural Network Language Models (NNLMs) are a successful approach to Natural Language Processing tasks such as Machine Translation. In this work we introduce a Statistical Machine Translation (SMT) system that fully integrates NNLMs in the decoding stage, breaking with the traditional approach based on n-best list rescoring. The neural net models (both language models (LMs) and translation models) are fully coupled in the decoding stage, allowing them to influence translation quality more strongly. Computational issues were solved with a novel idea based on memorization and smoothing of the softmax constants to avoid their computation, which introduces a trade-off between LM quality and computational cost. These ideas were studied in a machine translation task with different combinations of neural networks used both as translation models and as target LMs, comparing phrase-based and N-gram-based systems and showing that the integrated approach seems more promising for N-gram-based systems, even with NNLMs of less than full quality. [ABSTRACT FROM AUTHOR]
- Published: 2018
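The memorization-of-softmax-constants idea described above is essentially caching: once the normalization constant Z(history) has been computed for a history, later language-model queries during decoding only need a single output-layer row. The sketch below illustrates that caching only; the toy hidden-state function, vocabulary size, and the omission of the paper's smoothing step are all assumptions.

```python
import numpy as np

class CachedSoftmaxNNLM:
    """Toy NNLM scorer that memoizes the softmax normalization constant Z(history)."""
    def __init__(self, hidden_fn, W):
        self.hidden_fn = hidden_fn     # maps a history tuple to a hidden vector
        self.W = W                     # (vocab, hidden) output-layer matrix
        self._z_cache = {}

    def log_prob(self, word_id, history):
        h = self.hidden_fn(history)
        if history not in self._z_cache:                 # full denominator, computed once
            self._z_cache[history] = np.logaddexp.reduce(self.W @ h)
        return float(self.W[word_id] @ h - self._z_cache[history])

rng = np.random.default_rng(0)
W = rng.standard_normal((50, 8))                                       # 50-word toy vocabulary
hidden = lambda history: np.cos(np.arange(8) * (1 + sum(history)))     # deterministic stand-in
lm = CachedSoftmaxNNLM(hidden, W)
print(lm.log_prob(7, history=(3, 12)), lm.log_prob(9, history=(3, 12)))  # second call reuses Z
```

The trade-off the abstract mentions comes from reusing (and, in the paper, smoothing) these cached constants instead of recomputing the exact softmax for every hypothesis.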