785 results
Search Results
2. A survey on few-shot class-incremental learning.
- Author
-
Tian, Songsong, Li, Lusi, Li, Weijun, Ran, Hang, Ning, Xin, and Tiwari, Prayag
- Subjects
- *
DEEP learning , *ARTIFICIAL neural networks , *MACHINE learning , *IMAGE recognition (Computer vision) , *NATURAL language processing , *OBJECT recognition (Computer vision) - Abstract
Large deep learning models are impressive, but they struggle when real-time data is not available. Few-shot class-incremental learning (FSCIL) poses a significant challenge for deep neural networks: learning new tasks from just a few labeled samples without forgetting the previously learned ones. This setup can easily lead to catastrophic forgetting and overfitting, severely affecting model performance. Studying FSCIL helps overcome the limitations of deep learning models on data volume and acquisition time, while improving the practicality and adaptability of machine learning models. This paper provides a comprehensive survey on FSCIL. Unlike previous surveys, we aim to synthesize few-shot learning and incremental learning, introducing FSCIL from two perspectives, while reviewing over 30 theoretical research studies and more than 20 applied research studies. From the theoretical perspective, we provide a novel categorization that divides the field into five subcategories: traditional machine learning methods, meta learning-based methods, feature and feature space-based methods, replay-based methods, and dynamic network structure-based methods. We also evaluate the performance of recent theoretical research on benchmark datasets of FSCIL. From the application perspective, FSCIL has achieved impressive results in various fields of computer vision such as image classification, object detection, and image segmentation, as well as in natural language processing and graph learning. We summarize the important applications. Finally, we point out potential future research directions, including applications, problem setups, and theory development. Overall, this paper offers a comprehensive analysis of the latest advances in FSCIL from a methodological, performance, and application perspective.
• In-depth survey on FSCIL methods, applications, performance.
• Categorizing FSCIL into five subcategories for clear analysis.
• Evaluating FSCIL research on benchmarks for strengths, weaknesses.
• FSCIL applications in computer vision, NLP, and graph learning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Reduced-complexity Convolutional Neural Network in the compressed domain.
- Author
-
Abdellatef, Hamdan and Karam, Lina J.
- Subjects
- *
CONVOLUTIONAL neural networks , *ARTIFICIAL neural networks , *IMAGE compression , *COMPUTER vision , *COMPUTATIONAL complexity , *COMPUTER performance - Abstract
Deep neural networks have achieved outstanding performance in computer vision tasks. Convolutional Neural Networks (CNNs) typically operate in the spatial domain on raw images, but in practice, images are usually stored and transmitted in a compressed representation, with JPEG being one of the most widely used encoders. These networks are also computationally intensive and slow. This paper proposes performing the learning and inference processes in the compressed domain in order to reduce the computational complexity and improve the speed of popular CNNs. For this purpose, a novel graph-based frequency channel selection method is proposed to identify and select the most important frequency channels. The computational complexity is reduced by retaining the important frequency components and discarding the insignificant ones, as well as by eliminating unnecessary layers of the network. Experimental results show that the modified ResNet-50 operating in the compressed domain is up to 70% faster than the spatial-domain traditional ResNet-50 while achieving similar classification accuracy. Moreover, this paper proposes a preprocessing step with partial encoding to improve resilience to distortions caused by low-quality encoded images. Finally, we show that training a network with highly compressed data can achieve good classification accuracy with up to a 93% reduction in the storage requirements of the training data. [ABSTRACT FROM AUTHOR]
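The abstract does not detail the graph-based selection itself; as a rough illustration of the underlying idea — ranking 8×8 DCT frequency channels and retaining only the most informative ones — a minimal sketch using per-channel energy as a stand-in ranking criterion might look like this (the function name and criterion are assumptions, not the paper's method):

```python
import numpy as np
from scipy.fft import dctn

def select_frequency_channels(images, k):
    """Rank the 64 channels of an 8x8 block DCT by average energy and
    keep the top-k. `images` is an (N, H, W) grayscale batch with H and W
    divisible by 8; returns flattened channel indices in 0..63."""
    n, h, w = images.shape
    blocks = images.reshape(n, h // 8, 8, w // 8, 8).transpose(0, 1, 3, 2, 4)
    coeffs = dctn(blocks, axes=(-2, -1), norm="ortho")       # per-block 2-D DCT
    energy = (coeffs ** 2).mean(axis=(0, 1, 2)).reshape(64)  # mean energy per channel
    return np.argsort(energy)[::-1][:k]
```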
- Published
- 2024
- Full Text
- View/download PDF
4. Online continual learning with declarative memory.
- Author
-
Xiao, Zhe, Du, Zhekai, Wang, Ruijin, Gan, Ruimeng, and Li, Jingjing
- Subjects
- *
EXPLICIT memory , *ARTIFICIAL neural networks , *ONLINE education , *LONG-term memory , *MEMORY , *RIGHT to be forgotten , *DEEP learning - Abstract
Deep neural networks have enjoyed unprecedented attention and success in recent years. However, catastrophic forgetting undermines the performance of deep models when the training data arrive sequentially in an online multi-task learning fashion. To address this issue, we propose a novel method named continual learning with declarative memory (CLDM) in this paper. Specifically, our idea is inspired by the structure of human memory. Declarative memory is a major component of long-term memory which helps human beings memorize past experiences and facts. In this paper, we propose to formulate declarative memory as task memory and instance memory in neural networks to overcome catastrophic forgetting. Intuitively, the instance memory recalls the input–output relations (facts) in previous tasks, implemented by jointly rehearsing previous samples and learning current tasks, as replay-based methods do. In addition, the task memory aims to capture long-term task correlation information across task sequences to regularize the learning of the current task, thus preserving task-specific weight realizations (experience) in high task-specific layers. In this work, we implement a concrete instantiation of the proposed task memory by leveraging a recurrent unit. Extensive experiments on seven continual learning benchmarks verify that our proposed method outperforms previous approaches by a wide margin by retaining the information of both samples and tasks. [ABSTRACT FROM AUTHOR]
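The instance memory described above rehearses previous samples alongside the current task; a generic sketch of such an episodic buffer (reservoir sampling, a common choice — not necessarily the paper's exact CLDM implementation) is:

```python
import random

class ReplayBuffer:
    """Fixed-capacity episodic memory with reservoir sampling, so every
    example seen so far has an equal chance of being retained."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.seen)  # replace with prob. capacity/seen
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        """Draw a rehearsal mini-batch to mix with current-task data."""
        return random.sample(self.data, min(k, len(self.data)))
```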
- Published
- 2023
- Full Text
- View/download PDF
5. A general framework for robust stability analysis of neural networks with discrete time delays.
- Author
-
Solak, Melike, Faydasicok, Ozlem, and Arik, Sabri
- Subjects
- *
ROBUST stability analysis , *HOPFIELD networks , *ARTIFICIAL neural networks , *LYAPUNOV stability , *STABILITY criterion , *STABILITY theory - Abstract
Robust stability of different types of dynamical neural network models involving time delays has been extensively studied, and many different sets of sufficient conditions ensuring the robust stability of such models have been presented in past decades. In stability analysis of dynamical neural systems, some basic properties of the employed activation functions and the forms of the delay terms included in the mathematical representations of dynamical neural networks are of crucial importance in obtaining global stability criteria. Therefore, this article examines a class of neural networks expressed by a mathematical model that involves discrete time delay terms and Lipschitz activation functions, and possesses intervalized parameter uncertainties. The paper first presents a new, alternative upper bound on the second norm of this class of interval matrices, which has an important impact on obtaining the desired robust stability results for these neural network models. Then, by exploiting well-known homeomorphism mapping theory and basic Lyapunov stability theory, we state a new general framework for determining novel robust stability conditions for dynamical neural networks with discrete time delay terms. The paper also makes a comprehensive review of some previously published robust stability results and shows that the existing results can be easily derived from those given in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
6. Fixed-time and prescribed-time synchronization of quaternion-valued neural networks: A control strategy involving Lyapunov functions.
- Author
-
Peng, Tao, Wu, Yanqiu, Tu, Zhengwen, Alofi, A.S., and Lu, Jianquan
- Subjects
- *
ARTIFICIAL neural networks , *NEURAL circuitry , *LYAPUNOV functions , *SYNCHRONIZATION - Abstract
A control strategy involving Lyapunov functions is proposed in this paper. Based on this strategy, the fixed-time synchronization of a time-delay quaternion-valued neural network (QVNN) is analyzed. The strategy is then extended to the prescribed-time synchronization of the QVNN. Furthermore, an improved two-step switching control strategy is also proposed based on this flexible control strategy. Compared with some existing methods, the main method of this paper is a non-decomposition one, contains no sign function in the controller, and achieves better synchronization accuracy. Two numerical examples verify these advantages. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
7. Graph Spring Network and Informative Anchor Selection for session-based recommendation.
- Author
-
Zhang, Zizhuo and Wang, Bang
- Subjects
- *
SPACE , *NEIGHBORHOODS , *ENTROPY , *ELECTIONS , *ARTIFICIAL neural networks - Abstract
Session-based recommendation (SBR) aims at predicting the next item for an ongoing anonymous session. The major challenge of SBR is how to capture richer relations between items and learn ID-based item embeddings that capture such relations. Recent studies propose to first construct an item graph from sessions and employ a Graph Neural Network (GNN) to encode item embeddings from the graph. Although such graph-based approaches have achieved performance improvements, their GNNs are not well suited to ID-based embedding learning for the SBR task. In this paper, we argue that the objective of such ID-based embedding learning is to capture a kind of neighborhood affinity, in that the embedding of a node is similar to those of its neighbors in the embedding space. We propose a new graph neural network, called Graph Spring Network (GSN), for learning ID-based item embeddings on an item graph to optimize neighborhood affinity in the embedding space. Furthermore, we argue that even stacking multiple GNN layers may not be enough to encode potential relations between two item nodes far apart in a graph. We therefore propose a strategy that first selects some informative item anchors and then encodes items' potential relations to such anchors. In summary, we propose a GSN-IAS model (Graph Spring Network and Informative Anchor Selection) for the SBR task. We first construct an item graph to describe items' co-occurrences in all sessions. We design the GSN for ID-based item embedding learning and propose an item entropy measure to select informative anchors. We then design an unsupervised learning mechanism to encode items' relations to anchors. We next employ a shared gated recurrent unit (GRU) network to learn two session representations and make two next-item predictions. Finally, we design an adaptive decision fusion strategy to fuse the two predictions into the final recommendation. Extensive experiments on three public datasets demonstrate the superiority of our GSN-IAS model over state-of-the-art models. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
8. Meta-structure-based graph attention networks.
- Author
-
Li, Jin, Sun, Qingyu, Zhang, Feng, and Yang, Beining
- Subjects
- *
SUPERVISED learning , *ARTIFICIAL neural networks , *METAHEURISTIC algorithms - Abstract
Due to the ubiquity of graph-structured data, Graph Neural Networks (GNNs) have been widely used across tasks and domains, achieving good results in tasks such as node classification and link prediction. However, many challenges remain in representation learning on heterogeneous networks. Some existing graph neural network models are based on homogeneous graphs and thus ignore the rich semantic information carried by nodes and edges of different types; others are based on heterogeneous graphs but require predefined meta-structures (including meta-paths and meta-graphs) and do not account for the different effects that different meta-structures have on node representations. In this paper, we propose the MS-GAN model, which consists of four parts: a graph structure learner, a graph structure expander, a graph structure filter, and a graph structure parser. The graph structure learner automatically generates a graph structure consisting of useful meta-paths by selecting and combining the sub-adjacency matrices of the original graph using a 1 × 1 convolution. The graph structure expander further generates a graph structure containing meta-graphs via the Hadamard product, based on the previous step. The graph structure filter retains the graph structures that are most effective for downstream classification tasks, based on diversity. The graph structure parser assigns different weights to graph structures consisting of different meta-structures through semantic hierarchical attention. Finally, experiments on four datasets and meta-structure visualization analysis show that MS-GAN can automatically generate useful meta-structures and assign different weights to different meta-structures.
• Semi-supervised learning based graph neural network.
• Automatically generates meta-structures.
• Assigns weights to meta-structures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond.
- Author
-
Mu, Ronghui, Marcolino, Leandro, Ni, Qiang, and Ruan, Wenjie
- Subjects
- *
ARTIFICIAL neural networks , *VIDEOS , *STEREO vision (Computer science) , *PRESSURE gages - Abstract
Recent years have witnessed increasing interest in adversarial attacks on images, while adversarial video attacks have seldom been explored. In this paper, we propose a sparse adversarial attack strategy on videos (DeepSAVA). Our model aims to add a small, human-imperceptible perturbation to the key frame of the input video to fool classifiers. To carry out an effective attack that mirrors real-world scenarios, our algorithm integrates spatial transformation perturbations into the frame. Instead of using the l_p norm to gauge the disparity between the perturbed frame and the original frame, we employ the structural similarity index (SSIM), which has been established as a more suitable metric for quantifying image alterations resulting from spatial perturbations. We employ a unified optimisation framework to combine spatial transformation with additive perturbation, thereby attaining a more potent attack. We design an effective and novel optimisation scheme that alternately utilises Bayesian Optimisation (BO) to identify the most critical frame in a video and SGD-based optimisation to produce both additive and spatially transformed perturbations. Doing so enables DeepSAVA to perform a very sparse attack on videos, maintaining human imperceptibility while still achieving state-of-the-art performance in terms of both attack success rate and adversarial transferability. Furthermore, built upon the strong perturbations produced by DeepSAVA, we design a novel adversarial training framework to improve the robustness of video classification models. Our intensive experiments on various types of deep neural networks and video datasets confirm the superiority of DeepSAVA in terms of attacking performance and efficiency. When compared to the baseline techniques, DeepSAVA exhibits the highest level of performance in generating adversarial videos for three distinct video classifiers. Remarkably, it achieves an impressive fooling rate ranging from 99.5% to 100% for the I3D model, with the perturbation of just a single frame. Additionally, DeepSAVA demonstrates favourable transferability across various time series models. The proposed adversarial training strategy is also empirically demonstrated to train more robust video classifiers than state-of-the-art adversarial training with a projected gradient descent (PGD) adversary.
• Sparse attacks on video models: perturb fewer frames to gain a high fooling rate.
• Combining additive and spatial perturbations to enhance attacking performance.
• Using SSIM instead of the l_p-norm to maintain human imperceptibility.
• Applying Bayesian Optimisation to identify the most critical frame to perturb.
• A new adversarial training method based on a combination of diverse perturbations. [ABSTRACT FROM AUTHOR]
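As a small illustration of the SSIM-based distance the abstract mentions, the following sketch scores how visible a perturbation is on a single frame (the helper name is hypothetical; DeepSAVA's full objective also includes the spatial-transform parameters):

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def perceptual_distance(frame, perturbed):
    """1 - SSIM between an original and a perturbed frame, in [0, 1];
    higher values mean a more visible perturbation. Frames are float
    (H, W, C) arrays."""
    return 1.0 - ssim(frame, perturbed,
                      data_range=frame.max() - frame.min(),
                      channel_axis=-1)
```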
- Published
- 2024
- Full Text
- View/download PDF
10. Boundary uncertainty aware network for automated polyp segmentation.
- Author
-
Yue, Guanghui, Zhuo, Guibin, Yan, Weiqing, Zhou, Tianwei, Tang, Chang, Yang, Peng, and Wang, Tianfu
- Subjects
- *
ARTIFICIAL neural networks , *TRANSFORMER models , *COLON polyps , *POLYPS , *ADENOMATOUS polyps , *INSPECTION & review - Abstract
Recently, leveraging deep neural networks for automated colorectal polyp segmentation has emerged as a hot topic, owing to its advantages in evading the limitations of visual inspection, e.g., overwork and subjectivity. However, most existing methods do not pay enough attention to the uncertain areas of colonoscopy images and often provide unsatisfactory segmentation performance. In this paper, we propose a novel boundary uncertainty aware network (BUNet) for precise and robust colorectal polyp segmentation. Specifically, considering that polyps vary greatly in size and shape, we first adopt a pyramid vision transformer encoder to learn multi-scale feature representations. Then, a simple yet effective boundary exploration module (BEM) is proposed to explore boundary cues from the low-level features. To make the network focus on ambiguous areas where the prediction score is biased toward neither the foreground nor the background, we further introduce a boundary uncertainty aware module (BUM) that explores error-prone regions from the high-level features with the assistance of the boundary cues provided by the BEM. Through top-down hybrid deep supervision, our BUNet implements coarse-to-fine polyp segmentation and finally localizes polyp regions precisely. Extensive experiments on five public datasets show that BUNet is superior to thirteen competing methods in terms of both effectiveness and generalization ability.
• A boundary uncertainty aware network is proposed for accurate polyp segmentation.
• A boundary exploration module is proposed to explore boundary cues of polyps.
• A boundary uncertainty aware module is proposed to seek error-prone regions.
• The proposed network achieves high performance on public datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Star algorithm for neural network ensembling.
- Author
-
Zinchenko, Sergey and Lishudi, Dmitrii
- Subjects
- *
ARTIFICIAL neural networks , *ALGORITHMS , *CLASSIFICATION algorithms , *HUMAN fingerprints - Abstract
Neural network ensembling is a common and robust way to increase model efficiency. In this paper, we propose a new neural network ensemble algorithm based on Audibert's empirical star algorithm. We provide an optimal theoretical minimax bound on the excess squared risk. Additionally, we empirically study the algorithm on regression and classification tasks and compare it to the most popular ensembling methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. AdaSAM: Boosting sharpness-aware minimization with adaptive learning rate and momentum for training deep neural networks.
- Author
-
Sun, Hao, Shen, Li, Zhong, Qihuang, Ding, Liang, Chen, Shixiang, Sun, Jingwei, Li, Jing, Sun, Guangzhong, and Tao, Dacheng
- Subjects
- *
ARTIFICIAL neural networks , *DEEP learning - Abstract
The sharpness-aware minimization (SAM) optimizer has been extensively explored, as it generalizes better when training deep neural networks by introducing extra perturbation steps to flatten the landscape of deep learning models. Integrating SAM with an adaptive learning rate and momentum acceleration, dubbed AdaSAM, has already been explored empirically for training large-scale deep neural networks, but without a theoretical guarantee, owing to the threefold difficulty of analyzing the coupled perturbation step, adaptive learning rate, and momentum step. In this paper, we analyze the convergence rate of AdaSAM in the stochastic non-convex setting. We theoretically show that AdaSAM admits an O(1/√(bT)) convergence rate, which achieves the linear speedup property with respect to the mini-batch size b. Specifically, to decouple the stochastic gradient steps from the adaptive learning rate and perturbed gradient, we introduce a delayed second-order momentum term, decomposing them so that they are independent when taking expectations during the analysis. We then bound them by showing that the adaptive learning rate has a limited range, which makes our analysis feasible. To the best of our knowledge, we are the first to provide a non-trivial convergence rate for SAM with an adaptive learning rate and momentum acceleration. Finally, we conduct experiments on several NLP tasks and a synthetic task, showing that AdaSAM achieves superior performance compared with the SGD, AMSGrad, and SAM optimizers.
• Adaptive SAM with momentum acceleration ensures automatic learning rate adjustment.
• Adaptive SAM shows linear speedup, converging at O(1/√(bT)).
• Delaying the 2nd-order momentum separates the gradient from the adaptive rate and perturbed gradients. [ABSTRACT FROM AUTHOR]
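A minimal sketch of one AdaSAM-style update, assembled from the abstract's description (SAM ascent step, then an Adam-style adaptive step with momentum); the hyperparameter names and the absence of bias correction are simplifying assumptions, not the paper's exact update rules:

```python
import torch

def adasam_step(params, loss_fn, state, lr=1e-3, rho=0.05,
                beta1=0.9, beta2=0.999, eps=1e-8):
    """One step: (1) perturb weights toward the local worst case,
    (2) take the gradient there, (3) apply momentum + adaptive scaling."""
    grads = torch.autograd.grad(loss_fn(), params)
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    with torch.no_grad():
        eps_w = [rho * g / norm for g in grads]       # SAM ascent direction
        for p, e in zip(params, eps_w):
            p.add_(e)
    grads_p = torch.autograd.grad(loss_fn(), params)  # gradient at perturbed point
    with torch.no_grad():
        for p, e, g in zip(params, eps_w, grads_p):
            p.sub_(e)                                 # undo the perturbation
            m, v = state.setdefault(p, (torch.zeros_like(p), torch.zeros_like(p)))
            m = beta1 * m + (1 - beta1) * g           # first-order momentum
            v = beta2 * v + (1 - beta2) * g * g       # second-order moment
            state[p] = (m, v)
            p.sub_(lr * m / (v.sqrt() + eps))         # adaptive update
    return state
```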
- Published
- 2024
- Full Text
- View/download PDF
13. Low-variance Forward Gradients using Direct Feedback Alignment and momentum.
- Author
-
Bacho, Florian and Chu, Dominique
- Subjects
- *
ARTIFICIAL neural networks , *SUPERVISED learning , *MACHINE learning , *PSYCHOLOGICAL feedback , *AUTOMATIC differentiation , *ONLINE algorithms , *DEEP learning - Abstract
Supervised learning in deep neural networks is commonly performed using error backpropagation. However, the sequential propagation of errors during the backward pass limits its scalability and its applicability to low-powered neuromorphic hardware. Therefore, there is growing interest in finding local alternatives to backpropagation. Recently proposed methods based on forward-mode automatic differentiation suffer from high variance in large deep neural networks, which affects convergence. In this paper, we propose the Forward Direct Feedback Alignment algorithm, which combines Activity-Perturbed Forward Gradients with Direct Feedback Alignment and momentum. We provide both theoretical proofs and empirical evidence that our proposed method achieves lower variance than forward gradient techniques. In this way, our approach enables faster convergence and better performance compared to other local alternatives to backpropagation, and opens a new perspective for the development of online learning algorithms compatible with neuromorphic systems.
• Forward Gradient methods suffer from high variance that hinders convergence.
• Activity-Perturbed Forward Gradients can be used to learn derivatives as direct feedback connections.
• Feedback learning acts as a momentum that brings the gradient variance closer to backpropagation.
• Feedback learning reduces the bias of Direct Feedback Alignment.
• Feedback learning enables learning in convolutional layers. [ABSTRACT FROM AUTHOR]
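For context, a generic forward-mode gradient estimator (the high-variance baseline this line of work improves on) can be written in a few lines; this is the basic perturbation variant, not the paper's full FDFA algorithm:

```python
import torch
from torch.func import jvp

def forward_gradient(f, x):
    """Unbiased forward-gradient estimate of grad f(x): sample a random
    tangent v, compute the directional derivative (a JVP, no backward
    pass needed), and scale v by it."""
    v = torch.randn_like(x)
    _, dir_deriv = jvp(f, (x,), (v,))  # f must return a scalar
    return dir_deriv * v
```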
- Published
- 2024
- Full Text
- View/download PDF
14. Almost periodic quasi-projective synchronization of delayed fractional-order quaternion-valued neural networks.
- Author
-
Meng, Xiaofang, Li, Zhouhong, and Cao, Jinde
- Subjects
- *
ARTIFICIAL neural networks , *SYNCHRONIZATION , *FRACTIONAL calculus , *NEURAL circuitry , *CHARACTERISTIC functions , *METRIC spaces - Abstract
This paper examines almost periodic quasi-projective synchronization of delayed fractional-order quaternion-valued neural networks. First, using a direct method rather than decomposing the fractional-order quaternion-valued system into four equivalent fractional real-valued systems, we apply Banach's fixed-point theorem, basic properties of fractional calculus, and several inequality techniques to obtain sufficient conditions under which this class of neural networks has a unique almost periodic solution. Next, by constructing a suitable Lyapunov functional and using properties of the Mittag-Leffler function together with inequality scaling, we derive adequate conditions for quasi-projective synchronization of the established model and estimate an upper bound on the synchronization error. Finally, two numerical simulations are carried out in Matlab to verify the theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Saturation function-based continuous control on fixed-time synchronization of competitive neural networks.
- Author
-
Zheng, Caicai, Hu, Cheng, Yu, Juan, and Wen, Shiping
- Subjects
- *
NEURAL circuitry , *SYNCHRONIZATION , *ARTIFICIAL neural networks , *IMAGE encryption , *LONG-term memory , *SHORT-term memory - Abstract
To date, fixed-time (FXT) synchronization of competitive artificial neural networks (ANNs) has been explored by proposing discontinuous control strategies based on the signum function and by analyzing the short-term memory (STM) and long-term memory (LTM) subsystems separately. However, separate analysis usually leads to complicated theoretical derivations and synchronization conditions, and the signum function inevitably causes chattering, which degrades the performance of the control schemes. To address these challenges, this paper studies the FXT synchronization of competitive ANNs by establishing an FXT stability theorem of switching type and developing continuous control schemes based on a class of saturation functions. Firstly, unlike the traditional approach of studying STM and LTM separately, the STM and LTM models are combined into one higher-dimensional system so as to reduce the complexity of the theoretical analysis. Additionally, as an important theoretical preliminary, an FXT stability theorem with switching differential conditions is established, and some high-precision estimates of the convergence time are explicitly presented by means of several special functions. To achieve FXT synchronization of the addressed competitive ANNs, a continuous pure power-law control scheme is developed by introducing the saturation function in place of the signum function, and synchronization criteria are further derived from the established FXT stability theorem. These theoretical results are finally illustrated via a numerical example and applied to image encryption. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Understanding neural network through neuron level visualization.
- Author
-
Dou, Hui, Shen, Furao, Zhao, Jian, and Mu, Xinyu
- Subjects
- *
MACHINE learning , *ARTIFICIAL neural networks , *IMAGE recognition (Computer vision) , *CONVOLUTIONAL neural networks , *DATA visualization - Abstract
Neurons are the fundamental units of neural networks. In this paper, we propose a method for explaining neural networks by visualizing the learning process of neurons. For a trained neural network, the proposed method obtains the features learned by each neuron and displays them in a human-understandable form. The features learned by different neurons are combined to analyze the working mechanisms of different neural network models. The method is applicable to neural networks without requiring any changes to the models' architectures. In this study, we apply the proposed method to both Fully Connected Networks (FCNs) and Convolutional Neural Networks (CNNs) trained with the backpropagation learning algorithm. We conduct experiments on image classification models to demonstrate the effectiveness of the method. Through these experiments, we gain insights into the working mechanisms of various neural network architectures and evaluate neural network interpretability from diverse perspectives. [ABSTRACT FROM AUTHOR]
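A common way to display what a single neuron has learned, in the spirit of this abstract, is gradient-ascent activation maximization; the sketch below is a generic version of that idea, not necessarily the paper's exact procedure:

```python
import torch

def visualize_neuron(model, layer, unit, steps=200, lr=0.1, size=64):
    """Synthesize an input image that maximally activates one unit of
    `layer` by gradient ascent starting from random noise."""
    x = torch.randn(1, 3, size, size, requires_grad=True)
    acts = {}
    handle = layer.register_forward_hook(lambda m, i, o: acts.update(out=o))
    for _ in range(steps):
        model(x)
        acts["out"][0, unit].mean().backward()         # neuron's mean activation
        with torch.no_grad():
            x += lr * x.grad / (x.grad.norm() + 1e-8)  # normalized ascent step
            x.grad.zero_()
    handle.remove()
    return x.detach()
```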
- Published
- 2023
- Full Text
- View/download PDF
17. Lag projective synchronization of discrete-time fractional-order quaternion-valued neural networks with time delays.
- Author
-
He, Yan, Zhang, Weiwei, Zhang, Hai, Chen, Dingyuan, and Cao, Jinde
- Subjects
- *
ARTIFICIAL neural networks , *TIME delay systems , *SYNCHRONIZATION , *COMPUTER simulation - Abstract
This paper deals with the lag projective synchronization (LPS) problem for a class of discrete-time fractional-order quaternion-valued neural network (DTFOQVNN) systems with time delays. Firstly, a DTFOQVNN system with time delay is constructed. Secondly, linear and adaptive feedback controllers with a sign function are designed. Furthermore, through the Lyapunov direct method, a discrete-time fractional-order inequality technique, and the Razumikhin theorem, sufficient criteria are obtained to ensure that the system achieves LPS. Finally, the theoretical results are verified through numerical simulation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Periodicity and multi-periodicity generated by impulses control in delayed Cohen–Grossberg-type neural networks with discontinuous activations.
- Author
-
Cai, Zuowei, Huang, Lihong, Wang, Zengyun, Pan, Xianmin, and Liu, Shukun
- Subjects
- *
DIFFERENTIAL inclusions , *FIXED point theory , *SET-valued maps , *ARTIFICIAL neural networks , *ELECTRIC circuit networks , *NEURAL circuitry - Abstract
This paper discusses periodicity and multi-periodicity in delayed Cohen–Grossberg-type neural networks (CGNNs) with impulsive effects, whose activation functions possess discontinuities and are allowed to be unbounded or nonmonotonic. Based on differential inclusions and the cone expansion–compression fixed-point theory of set-valued mappings, several improved criteria are given to derive positive solutions with ω-periodicity and ω-multi-periodicity for delayed CGNNs under impulsive control. These ω-periodic/ω-multi-periodic orbits are produced by impulse control. The analytical method and theoretical results presented in this paper are of significance to the design of neural network models or circuits possessing discontinuous neuron activations and impulsive effects in a periodic environment. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
19. A distributed optimisation framework combining natural gradient with Hessian-free for discriminative sequence training.
- Author
-
Haider, Adnan, Zhang, Chao, Kreyssig, Florian L., and Woodland, Philip C.
- Subjects
- *
AUTOMATIC speech recognition , *HIDDEN Markov models , *ARTIFICIAL neural networks , *RECURRENT neural networks , *ALGORITHMS , *MAGNITUDE (Mathematics) , *CHANNEL estimation - Abstract
This paper presents a novel natural gradient and Hessian-free (NGHF) optimisation framework for neural network training that can operate efficiently in a distributed manner. It relies on the linear conjugate gradient (CG) algorithm to combine the natural gradient (NG) method with local curvature information from Hessian-free (HF) optimisation. A solution to a numerical issue in CG allows effective parameter updates to be generated with far fewer CG iterations than usually used (e.g. 5-8 instead of 200). This work also presents a novel preconditioning approach that improves the progress made by individual CG iterations for models with shared parameters. Although applicable to other training losses and model structures, NGHF is investigated in this paper for lattice-based discriminative sequence training of hybrid hidden Markov model acoustic models, using standard recurrent neural network, long short-term memory, and time delay neural network models for output probability calculation. Automatic speech recognition experiments are reported on the multi-genre broadcast data set for a range of different acoustic model types. These experiments show that NGHF achieves larger word error rate reductions than standard stochastic gradient descent or Adam, while requiring orders of magnitude fewer parameter updates.
• Large batch optimisation: NGHF. Combines natural gradient (NG) & Hessian-free (HF).
• Faster convergence of each update estimated via improved conjugate gradient.
• Applied NG, HF & NGHF to discriminative sequence training for speech recognition.
• NG, HF, and NGHF require orders of magnitude fewer parameter updates than Adam & SGD.
• MPE training with NGHF achieves lower word error rates than with NG, HF, Adam & SGD. [ABSTRACT FROM AUTHOR]
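The linear CG solver at the core of this family of methods only needs matrix-vector products, which is what makes such frameworks practical; a standard implementation with the small iteration budget quoted above looks like this (a generic sketch, not the paper's code):

```python
import numpy as np

def conjugate_gradient(Av, b, iters=8, tol=1e-10):
    """Approximately solve A x = b given only the product function Av(v)
    (e.g. Gauss-Newton or Fisher matrix-vector products, as in HF/NG).
    The default budget mirrors the 5-8 iterations mentioned above."""
    x = np.zeros_like(b)
    r = b.copy()                # residual b - A x, with x = 0 initially
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = Av(p)
        alpha = rs / (p @ Ap)   # step length along search direction p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```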
- Published
- 2021
- Full Text
- View/download PDF
20. Dichotomy value iteration with parallel learning design towards discrete-time zero-sum games.
- Author
-
Wang, Jiangyu, Wang, Ding, Li, Xin, and Qiao, Junfei
- Subjects
- *
ZERO sum games , *REINFORCEMENT learning , *COST functions , *DISCRETE-time systems , *NONLINEAR systems - Abstract
In this paper, a novel parallel learning framework is developed to solve zero-sum games for discrete-time nonlinear systems. Briefly, the purpose of this study is to determine a tentative function based on prior knowledge of the value iteration (VI) algorithm. The learning process of the parallel controllers can be guided by this tentative function; that is, the neighborhood of the optimal cost function can be compressed within a small range via two typical exploration policies. Based on the parallel learning framework, a novel dichotomy VI algorithm is established to accelerate the learning speed. It is shown that the parallel controllers converge to the optimal policy from opposite initial policies. Finally, two typical systems are used to demonstrate the learning performance of the constructed dichotomy VI algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
21. PEPNet: A barotropic primitive equations-based network for wind speed prediction.
- Author
-
Ye, Rui, Zhang, Baoquan, Li, Xutao, and Ye, Yunming
- Subjects
- *
WIND speed , *ARTIFICIAL neural networks , *BAROTROPIC equation , *NUMERICAL weather forecasting , *ATMOSPHERIC circulation , *GEOPOTENTIAL height - Abstract
In wind speed prediction technologies, deep learning-based methods have achieved promising advantages. However, most existing methods focus on learning implicit knowledge in a data-driven manner and neglect explicit knowledge from the physical theory of meteorological dynamics, failing to make stable, long-term predictions. In this paper, we explore introducing explicit physical knowledge into neural networks and propose the Physical Equations Predictive Network (PEPNet) for multi-step wind speed prediction. In PEPNet, a new neural block called the Augmented Neural Barotropic Equations (ANBE) block is designed as its key component, which aims to capture wind dynamics by combining the barotropic primitive equations with deep neural networks. Specifically, the ANBE block adopts a two-branch structure to model wind dynamics, where one branch is physics-based and the other is data-driven. The physics-based branch constructs temporal partial derivatives of meteorological elements (including u-component wind, v-component wind, and geopotential height) in a new Neural Barotropic Equations Unit (NBEU). The NBEU is developed based on the barotropic primitive-equation model of numerical weather prediction (NWP). Moreover, considering that the barotropic primitive model is a crude approximation of atmospheric motion, a second, data-driven branch is developed in the ANBE block, which aims at capturing meteorological dynamics beyond the barotropic primitive equations. Finally, PEPNet follows a time-variant structure to enhance the model's capability to capture wind dynamics over time. To evaluate the predictive performance of PEPNet, we have conducted experiments on two real-world datasets. Experimental results show that the proposed method outperforms state-of-the-art techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
22. Bidirectionally self-normalizing neural networks.
- Author
-
Lu, Yao, Gould, Stephen, and Ajanthan, Thalaiyasingam
- Subjects
- *
ARTIFICIAL neural networks , *PROBABILITY theory - Abstract
The problem of vanishing and exploding gradients has been a long-standing obstacle that hinders the effective training of neural networks. Despite the various tricks and techniques that have been employed to alleviate the problem in practice, satisfactory theories or provable solutions are still lacking. In this paper, we address the problem from the perspective of high-dimensional probability theory. We provide a rigorous result showing, under mild conditions, how the vanishing/exploding gradients problem disappears with high probability if the neural network has sufficient width. Our main idea is to constrain both forward and backward signal propagation in a nonlinear neural network through a new class of activation functions, namely Gaussian–Poincaré normalized functions, together with orthogonal weight matrices. Experiments on both synthetic and real-world data validate our theory and confirm its effectiveness on very deep neural networks when applied in practice.
• The vanishing/exploding gradient problem in training deep neural networks is addressed.
• The problem is provably solved under mild conditions using high-dimensional probability theory.
• Experiments show a neural network of 200 layers can be trained without linearizing the network. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
23. Memristor-based spiking neural network with online reinforcement learning.
- Author
-
Vlasov, Danila, Minnekhanov, Anton, Rybka, Roman, Davydov, Yury, Sboev, Alexander, Serenko, Alexey, Ilyasov, Alexander, and Demin, Vyacheslav
- Subjects
- *
ARTIFICIAL neural networks , *REINFORCEMENT learning , *MACHINE learning , *ONLINE education - Abstract
Neural networks implemented in memristor-based hardware can provide fast and efficient in-memory computation, but traditional learning methods such as error back-propagation are hardly feasible in such hardware. Spiking neural networks (SNNs) are highly promising in this regard, as their weights can be changed locally in a self-organized manner, without the demand for high-precision updates computed from information spanning almost the entire network. This problem is particularly relevant for solving control tasks with neural-network reinforcement learning methods, as those are highly sensitive to any source of stochasticity in the model initialization, training, or decision-making procedure. This paper presents an online reinforcement learning algorithm in which the change of connection weights is carried out after processing each environment state during interaction-with-environment data generation. Another novel feature of the algorithm is that it is applied to SNNs with memristor-based STDP-like learning rules. The plasticity functions are obtained from real memristors based on poly-p-xylylene and a CoFeB-LiNbO3 nanocomposite, which were experimentally assembled and analyzed. The SNN is composed of leaky integrate-and-fire neurons. Environmental states are encoded by the timings of input spikes, and the control action is decoded by the first spike. The proposed learning algorithm solves the Cart-Pole benchmark task successfully. This result could be the first step towards implementing a real-time agent learning procedure in a continuous-time environment that can be run on neuromorphic systems with memristive synapses. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. Approximation of smooth functionals using deep ReLU networks.
- Author
-
Song, Linhao, Liu, Ying, Fan, Jun, and Zhou, Ding-Xuan
- Subjects
- *
ARTIFICIAL neural networks , *FUNCTIONALS , *ANALYTIC functions , *FUNCTION spaces , *ANALYTIC spaces , *POLYNOMIAL approximation - Abstract
In recent years, deep neural networks have been employed to approximate nonlinear continuous functionals F defined on L^p([−1, 1]^s) for 1 ≤ p ≤ ∞. However, the existing theoretical analysis in the literature is either unsatisfactory due to poor approximation results, or does not apply to the rectified linear unit (ReLU) activation function. This paper aims to investigate the approximation power of functional deep ReLU networks in two settings: F is continuous with restrictions on its modulus of continuity, and F has higher-order Fréchet derivatives. A novel functional network structure is proposed to extract the features of higher-order smoothness harbored by the target functional F. Quantitative rates of approximation in terms of the depth, width, and total number of weights of the neural networks are derived for both settings. We give logarithmic rates when measuring the approximation error on the unit ball of a Hölder space. In addition, we establish nearly polynomial rates (i.e., rates of the form exp(−a(log M)^b) with a > 0, 0 < b < 1) when measuring the approximation error on a space of analytic functions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. Strengthening transferability of adversarial examples by adaptive inertia and amplitude spectrum dropout.
- Author
-
Li, Huanhuan, Yu, Wenbo, and Huang, He
- Subjects
- *
ARTIFICIAL neural networks - Abstract
Deep neural networks are sensitive to adversarial examples and can produce wrong results with high confidence. However, most existing attack methods exhibit weak transferability, especially against adversarially trained models and defense models. In this paper, two methods are proposed to generate highly transferable adversarial examples, namely the Adaptive Inertia Iterative Fast Gradient Sign Method (AdaI2-FGSM) and the Amplitude Spectrum Dropout Method (ASDM). Specifically, AdaI2-FGSM integrates adaptive inertia into the gradient-based attack and leverages the looking-ahead property to search for a flatter maximum, which is essential for strengthening the transferability of adversarial examples. By introducing a loss-preserving transformation in the frequency domain, the proposed ASDM, with its dropout invariance property, can craft copies of input images that overcome poor generalization on the surrogate models. Furthermore, AdaI2-FGSM and ASDM can be naturally integrated into an efficient gradient-based attack method that yields more transferable adversarial examples. Extensive experimental results on the ImageNet-compatible dataset demonstrate that our method achieves higher transferability than several advanced gradient-based attacks. [ABSTRACT FROM AUTHOR]
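AdaI2-FGSM builds on the momentum-iterative FGSM family; a sketch of that gradient-based backbone is below (plain momentum stands in for adaptive inertia, which the paper defines but this abstract does not specify):

```python
import torch

def mi_fgsm(model, loss_fn, x, y, eps=8 / 255, steps=10, mu=1.0):
    """Momentum-iterative FGSM: accumulate an L1-normalized gradient
    'inertia' term and step along its sign, projecting onto the
    L-infinity ball of radius eps around the clean images x."""
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)
    alpha = eps / steps
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project to the eps-ball
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```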
- Published
- 2023
- Full Text
- View/download PDF
26. Preassigned-time projective synchronization of delayed fully quaternion-valued discontinuous neural networks with parameter uncertainties.
- Author
-
Pu, Hao, Li, Fengjun, Wang, Qingyun, and Li, Pengzhen
- Subjects
- *
ARTIFICIAL neural networks , *NEURAL circuitry , *SYNCHRONIZATION , *STABILITY theory - Abstract
This paper is concerned with the preassigned-time projective synchronization issue for delayed fully quaternion-valued discontinuous neural networks involving parameter uncertainties, using the non-separation method. First, building on existing works, a new preassigned-time stability theorem is established. Second, to realize the control goals, two types of novel and simple chattering-free quaternion controllers are designed, one without a power-law term and the other with a hyperbolic-tangent function; they differ from the common power-law and exponential controllers in the literature. Third, under Filippov discontinuity theory and with the aid of quaternion inequality techniques, some novel, succinct sufficient criteria are obtained that ensure the addressed systems achieve preassigned-time synchronization, by means of the preassigned-time stability theory. The preassigned settling time is free from any parameter or initial value of the system and can be preset according to the actual task demands. In particular, unlike existing results, the proposed control methods effectively avoid the chattering phenomenon, and the time-delay part is removed for simplicity. Additionally, the projection coefficient is a generic quaternion rather than real-valued or complex-valued, and some previous relevant results are extended. Lastly, numerical simulations are reported to substantiate the effectiveness of the control strategies, the merits of the preassigned settling time, and the correctness of the acquired results. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
27. A novel framework of prescribed time/fixed time/finite time stochastic synchronization control of neural networks and its application in image encryption.
- Author
-
Wang, Xin, Cao, Jinde, Zhou, Xianghui, Liu, Ying, Yan, Yaoxi, and Wang, Jiangtao
- Subjects
- *
IMAGE encryption , *ARTIFICIAL neural networks , *NEURAL circuitry , *SYNCHRONIZATION - Abstract
In this paper, we investigate a novel framework for achieving prescribed-time (PAT), fixed-time (FXT), and finite-time (FNT) stochastic synchronization control of semi-Markov switching quaternion-valued neural networks (SMS-QVNNs), where the settling time (ST) of PAT/FXT/FNT stochastic synchronization control is effectively preassigned and estimated. The investigated framework differs from existing PAT/FXT/FNT control frameworks and from PAT/FXT frameworks (where PAT control depends deeply on FXT control, so that removing the FXT control task makes the PAT control task impossible), and from existing PAT control frameworks (which employ a time-varying control gain such as μ(t) = T/(T − t) with t ∈ [0, T), leading to an unbounded control gain as t → T⁻). In contrast, our framework is built on a single control strategy that accomplishes all three control tasks (PAT/FXT/FNT control), with control gains that remain bounded even as time t tends to the prescribed time T. Four numerical examples and an application to image encryption/decryption illustrate the feasibility of the proposed framework. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
28. An unsupervised STDP-based spiking neural network inspired by biologically plausible learning rules and connections.
- Author
-
Dong, Yiting, Zhao, Dongcheng, Li, Yang, and Zeng, Yi
- Subjects
- *
ARTIFICIAL neural networks , *ADAPTIVE filters , *DEEP learning , *CONCEPT learning , *NEUROPLASTICITY , *INTERNEURONS - Abstract
The backpropagation algorithm has promoted the rapid development of deep learning, but it relies on a large amount of labeled data, and there remains a large gap between it and how humans learn. The human brain can quickly learn various conceptual knowledge in a self-organized, unsupervised manner, accomplished by coordinating various learning rules and structures. Spike-timing-dependent plasticity (STDP) is a general learning rule in the brain, but spiking neural networks (SNNs) trained with STDP alone are inefficient and perform poorly. In this paper, taking inspiration from short-term synaptic plasticity, we design an adaptive synaptic filter and introduce an adaptive spiking threshold as neuron plasticity to enrich the representation ability of SNNs. We also introduce an adaptive lateral inhibitory connection to adjust the spike balance dynamically, helping the network learn richer features. To speed up and stabilize the training of unsupervised spiking neural networks, we design a samples-temporal-batch STDP (STB-STDP), which updates weights based on multiple samples and moments. By integrating the above three adaptive mechanisms with STB-STDP, our model greatly accelerates the training of unsupervised SNNs and improves their performance on complex tasks. Our model achieves the current state-of-the-art performance of unsupervised STDP-based SNNs on the MNIST and FashionMNIST datasets. Further, we tested it on the more complex CIFAR10 dataset, and the results fully illustrate the superiority of our algorithm. Ours is also the first work to apply unsupervised STDP-based SNNs to CIFAR10. At the same time, in the small-sample learning scenario, it far exceeds a supervised ANN with the same structure. [ABSTRACT FROM AUTHOR]
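The classical pair-based STDP rule that underlies this learning scheme can be written compactly; this textbook form is for illustration only (the paper's adaptive mechanisms and STB-STDP extend it):

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic spike (LTP), depress otherwise (LTD). Times in ms."""
    dt = t_post - t_pre
    if dt >= 0:
        w += a_plus * np.exp(-dt / tau_plus)    # pre before post: LTP
    else:
        w -= a_minus * np.exp(dt / tau_minus)   # post before pre: LTD
    return float(np.clip(w, w_min, w_max))
```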
- Published
- 2023
- Full Text
- View/download PDF
29. Enhanced covertness class discriminative universal adversarial perturbations.
- Author
-
Gao, Haoran, Zhang, Hua, Zhang, Xin, Li, Wenmin, Wang, Jiahui, and Gao, Fei
- Subjects
- *
ARTIFICIAL neural networks - Abstract
The main aim of class discriminative universal adversarial perturbations (CD-UAPs) is to let the adversary flexibly control the targeted classes while influencing the remaining classes only to a limited extent. CD-UAPs generated by existing attack strategies suffer from a high fooling ratio on non-targeted source classes under both non-targeted and targeted attacks, and face an increasing risk of discovery. In this paper, we propose a training framework for generating CD-UAPs with enhanced covertness. It trains on the targeted source class set and the non-targeted source class set alternately to update the perturbation, and introduces logit pairing to mitigate the influence of the perturbation on the non-targeted source class set. Further, we extend CD-UAPs from the targeted (one-targeted) attack to the multi-targeted attack, which perturbs a targeted source class to multiple targeted sink classes, posing a serious threat in the considered scenario. This not only gives the adversary the freedom to attack precisely but also reduces the risk of being detected, making the attack a strong threat to security-sensitive applications. Extensive experiments on the CIFAR-10, CIFAR-100, and ImageNet datasets show that our method generates more deceptive perturbations and enhances the covertness of CD-UAPs. For example, our method improves the absolute fooling ratio gaps of ResNet-20 and VGG-16 by 9.46% and 6.94%, respectively, compared with the baseline method. We achieve the multi-targeted attack with a high fooling ratio on the GTSRB dataset; the average absolute target fooling ratio gaps of ResNet-20 and VGG-16 are 81.89% and 76.33%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
30. Stable invariant models via Koopman spectra.
- Author
-
Konishi, Takuya and Kawahara, Yoshinobu
- Subjects
- *
ARTIFICIAL neural networks , *INVARIANT sets , *NEURAL development - Abstract
Weight-tied models have attracted attention in the modern development of neural networks. The deep equilibrium model (DEQ) represents infinitely deep neural networks with weight-tying, and recent studies have shown the potential of this type of approach. DEQs need to iteratively solve root-finding problems during training and are built on the assumption that the underlying dynamics determined by the models converge to a fixed point. In this paper, we present the stable invariant model (SIM), a new class of deep models that in principle approximates DEQs under stability and extends the dynamics to more general dynamics converging to an invariant set (not restricted to a fixed point). The key ingredient in deriving SIMs is a representation of the dynamics with the spectra of the Koopman and Perron–Frobenius operators. This perspective approximately reveals stable dynamics within DEQs and then yields two variants of SIMs. We also propose an implementation of SIMs that can be learned in the same way as feedforward models. We illustrate the empirical performance of SIMs with experiments and demonstrate that SIMs achieve comparable or superior performance against DEQs in several learning tasks.
• A Koopman spectral analysis is performed for deep equilibrium models.
• The analysis identifies stable dynamics determined by deep equilibrium models.
• We develop a class of deep models to approximate the stable dynamics.
• Models in the class can represent dynamics converging to an invariant set. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
31. A regularization perspective based theoretical analysis for adversarial robustness of deep spiking neural networks.
- Author
-
Zhang, Hui, Cheng, Jian, Zhang, Jun, Liu, Hongyi, and Wei, Zhihui
- Subjects
- *
ARTIFICIAL neural networks , *POISSON processes , *SUM of squares , *MNEMONICS , *STOCHASTIC processes - Abstract
The Spiking Neural Network (SNN) has been recognized as the third generation of neural networks. Conventionally, an SNN can be converted from a pre-trained Artificial Neural Network (ANN) with less computation and memory than training from scratch. However, such converted SNNs are vulnerable to adversarial attacks. Numerical experiments demonstrate that an SNN trained by directly optimizing the loss function is more adversarially robust, but a theoretical analysis of the mechanism behind this robustness has been lacking. In this paper, we provide a theoretical explanation by analyzing the expected risk function. Starting by modeling the stochastic process introduced by the Poisson encoder, we prove that there is a positive semidefinite regularizer. Perhaps surprisingly, this regularizer pushes the gradients of the output with respect to the input closer to zero, thus resulting in inherent robustness against adversarial attacks. Extensive experiments on the CIFAR10 and CIFAR100 datasets support our point of view. For example, we find that the sum of squared input gradients of converted SNNs is 13-160 times that of directly trained SNNs, and the smaller this quantity, the smaller the degradation of accuracy under adversarial attack.
• The adversarial robustness of SNNs based on rate encoders is explained mathematically.
• A regularizer distinguishes the directly trained SNN from the converted SNN.
• The coding length can also change the gradient of the input and thereby affect the robustness.
• We compare more types of trained SNNs with converted SNNs. [ABSTRACT FROM AUTHOR]
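The quantity the experiments track — the sum of squared gradients of the loss with respect to the input — is easy to compute directly; a minimal diagnostic sketch (our naming, not the paper's code) is:

```python
import torch

def input_gradient_energy(model, x, y, loss_fn):
    """Sum of squared input gradients of the loss: smaller values mean a
    flatter input-output map, which the paper links to robustness."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (grad ** 2).sum().item()
```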
- Published
- 2023
- Full Text
- View/download PDF
32. Learning emotions latent representation with CVAE for text-driven expressive audiovisual speech synthesis.
- Author
-
Dahmani, Sara, Colotte, Vincent, Girard, Valérian, and Ouni, Slim
- Subjects
- *
SPEECH synthesis , *EMOTIONS , *MANIPULATIVE behavior , *AUTOMATIC speech recognition , *VISUAL training , *DEEP learning , *ARTIFICIAL neural networks - Abstract
Great improvement has been made in the field of expressive audiovisual Text-to-Speech synthesis (EAVTTS) thanks to deep learning techniques. However, generating realistic speech is still an open issue, and researchers in this area have lately been focusing on controlling speech variability. In this paper, we use different neural architectures to synthesize emotional speech. We study the application of unsupervised learning techniques to emotional speech modeling, as well as methods for restructuring the emotion representation to make it continuous and more flexible. This manipulation of the emotional representation should allow us to generate new styles of speech by mixing emotions. We first present our expressive audiovisual corpus. We validate the emotional content of this corpus with three perceptual experiments using acoustic-only, visual-only, and audiovisual stimuli. After that, we analyze the performance of a fully connected neural network in learning characteristics specific to different emotions for phone duration and for the acoustic and visual modalities. We also study the contribution of joint versus separate training of the acoustic and visual modalities to the quality of the generated synthetic speech. In the second part of this paper, we use a conditional variational auto-encoder (CVAE) architecture to learn a latent representation of emotions. We apply this method in an unsupervised manner to generate features of expressive speech. We use a probabilistic metric to compute the degree of overlap between emotion latent clusters in order to choose the best parameters for the CVAE. By manipulating the latent vectors, we are able to generate nuances of a given emotion and to generate new emotions that do not exist in our database. For these new emotions, we obtain coherent articulation. We conducted four perceptual experiments to evaluate our findings. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
33. Statistical foundation of Variational Bayes neural networks.
- Author
-
Bhattacharya, Shrijita and Maiti, Tapabrata
- Subjects
- *
ARTIFICIAL neural networks , *MARKOV chain Monte Carlo - Abstract
Despite the popularity of Bayesian neural networks (BNNs) in recent years, their use is somewhat limited in complex and big data situations due to the computational cost associated with full posterior evaluations. Variational Bayes (VB) provides a useful alternative to circumvent the computational cost and time complexity associated with the generation of samples from the true posterior using Markov Chain Monte Carlo (MCMC) techniques. The efficacy of VB methods is well established in the machine learning literature. However, their potential broader impact is hindered by a lack of theoretical validity from a statistical perspective. In this paper, we establish the fundamental result of posterior consistency for the mean-field variational posterior (VP) for a feed-forward artificial neural network model. The paper underlines the conditions needed to guarantee that the VP concentrates around Hellinger neighborhoods of the true density function. Additionally, the role of the scale parameter and its influence on the convergence rates is discussed. The paper mainly relies on two results: (1) the rate at which the true posterior grows; (2) the rate at which the Kullback–Leibler (KL) distance between the posterior and variational posterior grows. The theory provides a guideline for building prior distributions for BNNs along with an assessment of the accuracy of the corresponding VB implementation. [ABSTRACT FROM AUTHOR]
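For readers unfamiliar with mean-field VB, the sketch below shows the KL term a Gaussian mean-field variational posterior contributes to the training objective; the prior (scale) variance is the parameter whose influence on convergence rates the paper analyzes. This is a generic illustration, not the paper's construction:

```python
import math
import torch

def mean_field_gaussian_kl(mu, log_var, prior_var=1.0):
    """KL( N(mu, diag(exp(log_var))) || N(0, prior_var * I) ), summed over
    all weights: the regularizer added to the negative log-likelihood in a
    standard mean-field VB implementation for a BNN layer."""
    var = log_var.exp()
    return 0.5 * (var / prior_var + mu.pow(2) / prior_var - 1.0
                  - log_var + math.log(prior_var)).sum()

mu = torch.zeros(100, requires_grad=True)        # variational means
log_var = torch.full((100,), -2.0, requires_grad=True)
print(mean_field_gaussian_kl(mu, log_var))
```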
- Published
- 2021
- Full Text
- View/download PDF
34. LoyalDE: Improving the performance of Graph Neural Networks with loyal node discovery and emphasis.
- Author
-
Wei, Haotong, Zhu, Yinlin, Li, Xunkai, and Jiang, Bin
- Subjects
- *
SIMILARITY (Geometry) , *LOYALTY , *SUPERVISED learning , *ARTIFICIAL neural networks - Abstract
Recent years have witnessed an increasing focus on graph-based semi-supervised learning with Graph Neural Networks (GNNs). Although existing GNNs have achieved remarkable accuracy, research on the quality of graph supervision information has inadvertently been ignored. In fact, there are significant differences in the quality of the supervision information provided by different labeled nodes, and treating supervision information of different quality equally may lead to sub-optimal performance of GNNs. We refer to this as the graph supervision loyalty problem, which is a new perspective for improving the performance of GNNs. In this paper, we devise FT-Score to quantify node loyalty by considering both local feature similarity and local topology similarity; nodes with higher loyalty are more likely to provide higher-quality supervision. Based on this, we propose LoyalDE (Loyal Node Discovery and Emphasis), a model-agnostic hot-plugging training strategy, which discovers potential nodes with high loyalty to expand the training set, and then emphasizes nodes with high loyalty during model training to improve performance. Experiments demonstrate that the graph supervision loyalty problem undermines most existing GNNs. In contrast, LoyalDE brings up to 9.1% performance improvement to vanilla GNNs and consistently outperforms several state-of-the-art training strategies for semi-supervised node classification. [ABSTRACT FROM AUTHOR]
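The abstract does not give the FT-Score formula; as a purely hypothetical sketch of a loyalty-style score, the function below blends a node's mean cosine feature similarity with its neighbors and the mean Jaccard overlap of their neighborhoods (the paper's exact definition may differ):

```python
import numpy as np

def loyalty_score(adj, feats, node, weight=0.5):
    """Hypothetical loyalty score mixing local feature similarity (mean
    cosine similarity with neighbors) and local topology similarity (mean
    Jaccard overlap of neighborhoods); not the paper's exact FT-Score."""
    neigh = np.flatnonzero(adj[node])
    if neigh.size == 0:
        return 0.0
    f = feats[node]
    cos = feats[neigh] @ f / (
        np.linalg.norm(feats[neigh], axis=1) * np.linalg.norm(f) + 1e-12)
    own = set(neigh)
    jac = [len(own & set(np.flatnonzero(adj[v]))) /
           len(own | set(np.flatnonzero(adj[v]))) for v in neigh]
    return weight * cos.mean() + (1 - weight) * float(np.mean(jac))

rng = np.random.default_rng(0)
adj = (rng.random((10, 10)) < 0.3).astype(float)
adj = np.maximum(adj, adj.T); np.fill_diagonal(adj, 0)   # undirected graph
feats = rng.standard_normal((10, 8))
print(loyalty_score(adj, feats, node=0))
```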
- Published
- 2023
- Full Text
- View/download PDF
35. Multi-granularity knowledge distillation and prototype consistency regularization for class-incremental learning.
- Author
-
Shi, Yanyan, Shi, Dianxi, Qiao, Ziteng, Wang, Zhen, Zhang, Yi, Yang, Shaowu, and Qiu, Chunping
- Subjects
- *
ARTIFICIAL neural networks , *PROTOTYPES , *IMAGE recognition (Computer vision) - Abstract
Deep neural networks (DNNs) are prone to the notorious catastrophic forgetting problem when learning new tasks incrementally. Class-incremental learning (CIL) is a promising solution to tackle this challenge and learn new classes while not forgetting old ones. Existing CIL approaches have adopted stored representative exemplars or complex generative models to achieve good performance. However, storing data from previous tasks causes memory or privacy issues, and the training of generative models is unstable and inefficient. This paper proposes a method based on multi-granularity knowledge distillation and prototype consistency regularization (MDPCR) that performs well even when the previous training data is unavailable. First, we design knowledge distillation losses in the deep feature space to constrain the incremental model trained on the new data. Multi-granularity is captured from three aspects: distilling multi-scale self-attentive features, the feature similarity probability, and global features to maximize the retention of previous knowledge, effectively alleviating catastrophic forgetting. Second, we preserve the prototype of each old class and employ prototype consistency regularization (PCR) to ensure that the old prototypes and semantically enhanced prototypes produce consistent predictions, which enhances the robustness of old prototypes and reduces classification bias. Extensive experiments on three CIL benchmark datasets confirm that MDPCR performs significantly better than exemplar-free methods and outperforms typical exemplar-based approaches. • Multi-granularity knowledge distillation is proposed to alleviate catastrophic forgetting for class-incremental classification. • Prototype consistency regularization significantly boosts the performance of CIL and effectively mitigates classification bias. • Our framework performs well even when the previous training data is unavailable. [ABSTRACT FROM AUTHOR]
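As an illustrative sketch with toy sizes (not the authors' exact losses), two of the three granularities, global feature distillation and feature-similarity-probability distillation, might look like this:

```python
import torch
import torch.nn.functional as F

def distillation_loss(feats_old, feats_new, T=2.0):
    """Sketch of two granularities of feature-space distillation between
    the frozen previous model and the incremental model: a global feature
    term plus a KL term over pairwise feature-similarity probabilities."""
    loss_feat = F.mse_loss(feats_new, feats_old)          # global features
    sim_old = F.log_softmax(feats_old @ feats_old.t() / T, dim=1)
    sim_new = F.log_softmax(feats_new @ feats_new.t() / T, dim=1)
    loss_sim = F.kl_div(sim_new, sim_old, log_target=True,
                        reduction="batchmean")
    return loss_feat + loss_sim

feats_old = torch.randn(32, 128)    # features from the frozen old model
feats_new = torch.randn(32, 128, requires_grad=True)
print(distillation_loss(feats_old, feats_new))
```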
- Published
- 2023
- Full Text
- View/download PDF
36. An accelerated end-to-end method for solving routing problems.
- Author
-
Zhu, Tianyu, Shi, Xinli, Xu, Xiangping, and Cao, Jinde
- Subjects
- *
DEEP learning , *SUPERVISED learning , *ARTIFICIAL neural networks , *REINFORCEMENT learning , *TRAVELING salesman problem , *PROBLEM solving , *COMBINATORIAL optimization - Abstract
The application of neural network models to combinatorial optimization has recently drawn much attention and shown promising results on problems such as the Travelling Salesman Problem. Neural networks make it possible to learn solutions from given problem instances using reinforcement learning or supervised learning. In this paper, we present a novel end-to-end method to solve routing problems. Specifically, we propose a gated cosine-based attention model (GCAM) to train policies, which accelerates the training process and the convergence of the policy. Extensive experiments on routing problems of different scales show that the proposed method achieves faster convergence of the training process than state-of-the-art deep learning models while achieving solutions of the same quality. • We propose a cosine-based neural network model based on the transformer architecture to improve node embeddings. • We use cosine decomposability to accelerate the training of the proposed model. • We use a gating connection to stabilize training and achieve faster convergence, which leads to better overall performance. • We apply the proposed gated cosine-based attention model to solve several routing problems. [ABSTRACT FROM AUTHOR]
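A minimal sketch of what a gated cosine-based attention layer could look like; all sizes and details here are assumptions, and the paper's GCAM may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedCosineAttention(nn.Module):
    """Sketch: attention scores from cosine similarity of queries and keys
    (unit-normalized projections), with a learned gate blending the
    attended values back into the input."""
    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, x):                      # x: (batch, nodes, dim)
        q = F.normalize(self.q(x), dim=-1)     # unit norm -> cosine scores
        k = F.normalize(self.k(x), dim=-1)
        attn = torch.softmax(q @ k.transpose(1, 2), dim=-1)
        out = attn @ self.v(x)
        g = torch.sigmoid(self.gate(torch.cat([x, out], dim=-1)))
        return g * out + (1 - g) * x           # gating connection

x = torch.randn(2, 20, 64)                     # 20 nodes of a routing instance
print(GatedCosineAttention(64)(x).shape)
```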
- Published
- 2023
- Full Text
- View/download PDF
37. CSAST: Content self-supervised and style contrastive learning for arbitrary style transfer.
- Author
-
Zhang, Yuqi, Tian, Yingjie, and Hou, Junjie
- Subjects
- *
COGNITIVE styles , *ARTIFICIAL neural networks , *SUPERVISED learning , *ARTISTIC style - Abstract
Arbitrary artistic style transfer has achieved great success with deep neural networks, but it is still difficult for existing methods to tackle the dilemma of content preservation and style translation due to the inherent content-and-style conflict. In this paper, we introduce content self-supervised learning and style contrastive learning to arbitrary style transfer for improved content preservation and style translation, respectively. The former is based on the assumption that the stylization of a geometrically transformed image is perceptually similar to applying the same transformation to the stylized result of the original image. This content self-supervised constraint noticeably improves content consistency before and after style translation, and also contributes to reducing noise and artifacts. Furthermore, it is especially suitable for video style transfer, owing to its ability to promote inter-frame continuity, which is of crucial importance to the visual stability of video sequences. For the latter, we construct a contrastive loss that pulls together style representations (Gram matrices) of the same style and pushes apart those of different styles. This brings more accurate style translation and a more appealing visual effect. A large number of qualitative and quantitative experiments demonstrate the superiority of our method in improving arbitrary style transfer quality, both for images and videos. [ABSTRACT FROM AUTHOR]
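A hedged sketch of the style-contrast idea, written as a triplet-style formulation with an assumed margin (the paper's exact loss may differ):

```python
import torch
import torch.nn.functional as F

def gram(feat):
    """Gram matrix of a feature map (batch, C, H, W) -> (batch, C, C)."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_contrastive_loss(feat_a, feat_pos, feat_neg, margin=1.0):
    """Pull Gram matrices of the same style together and push those of a
    different style apart (triplet-style sketch, assumed margin)."""
    g_a, g_p, g_n = gram(feat_a), gram(feat_pos), gram(feat_neg)
    d_pos = (g_a - g_p).pow(2).flatten(1).sum(1)
    d_neg = (g_a - g_n).pow(2).flatten(1).sum(1)
    return F.relu(d_pos - d_neg + margin).mean()

f = lambda: torch.randn(2, 8, 16, 16)   # dummy VGG-style feature maps
print(style_contrastive_loss(f(), f(), f()))
```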
- Published
- 2023
- Full Text
- View/download PDF
38. A multi-view co-training network for semi-supervised medical image-based prognostic prediction.
- Author
-
Li, Hailin, Wang, Siwen, Liu, Bo, Fang, Mengjie, Cao, Runnan, He, Bingxi, Liu, Shengyuan, Hu, Chaoen, Dong, Di, Wang, Ximing, Wang, Hexiang, and Tian, Jie
- Subjects
- *
ARTIFICIAL neural networks , *SUPERVISED learning , *PROGNOSTIC models , *SUPPORT vector machines , *TIME perception , *REINFORCEMENT learning - Abstract
Prognostic prediction has long been a hotspot in disease analysis and management, and the development of image-based prognostic prediction models has significant clinical implications for current personalized treatment strategies. The main challenge in prognostic prediction is to model a regression problem based on censored observations, and semi-supervised learning has the potential to play an important role in improving the utilization efficiency of censored data. However, few effective semi-supervised paradigms have yet been applied. In this paper, we propose a semi-supervised co-training deep neural network incorporating a support vector regression layer for survival time estimation (Co-DeepSVS) that improves the efficiency of utilizing censored data for prognostic prediction. First, we introduce a support vector regression layer in deep neural networks to deal with censored data and directly predict survival time, and, more importantly, to calculate the labeling confidence of each case. Then, we apply a semi-supervised multi-view co-training framework to achieve accurate prognostic prediction, where labeling confidence estimation with prior knowledge of pseudo time is conducted for each view. Experimental results demonstrate that the proposed Co-DeepSVS has promising prognostic ability and surpasses most widely used methods on a multi-phase CT dataset. Besides, the introduction of the SVR layer makes the model more robust in the presence of follow-up bias. [ABSTRACT FROM AUTHOR]
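To illustrate how an SVR-style loss can accommodate censoring, here is a generic sketch (not the paper's layer): uncensored samples get the usual two-sided epsilon-insensitive penalty, while censored samples are penalized only when the prediction falls below the censoring time:

```python
import torch

def censored_svr_loss(pred, time, event, eps=0.1):
    """Epsilon-insensitive regression loss adapted to right-censored
    survival data; event=1 means the event was observed, event=0 means
    the sample is censored at `time`. A generic sketch, not the paper's
    exact SVR layer."""
    resid = time - pred
    two_sided = torch.clamp(resid.abs() - eps, min=0.0)
    one_sided = torch.clamp(resid - eps, min=0.0)   # only pred < time hurts
    return torch.where(event.bool(), two_sided, one_sided).mean()

pred = torch.tensor([2.0, 3.0, 1.0])
time = torch.tensor([2.5, 2.0, 4.0])     # observed or censoring times
event = torch.tensor([1, 1, 0])          # 0 = censored
print(censored_svr_loss(pred, time, event))
```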
- Published
- 2023
- Full Text
- View/download PDF
39. Learning defense transformations for counterattacking adversarial examples.
- Author
-
Li, Jincheng, Zhang, Shuhai, Cao, Jiezhang, and Tan, Mingkui
- Subjects
- *
ARTIFICIAL neural networks , *AFFINE transformations - Abstract
Deep neural networks (DNNs) are vulnerable to adversarial examples with small perturbations. Adversarial defense has thus become an important means of improving the robustness of DNNs by defending against adversarial examples. Existing defense methods focus on specific types of adversarial examples and may fail to defend well in real-world applications, where we may face many types of attacks and the exact type of adversarial examples can even be unknown. In this paper, motivated by the observations that adversarial examples are more likely to appear near the classification boundary and are vulnerable to some transformations, we study adversarial examples from a new perspective: whether we can defend against them by pulling them back to the original clean distribution. We empirically verify the existence of defense affine transformations that restore adversarial examples. Relying on this, we learn defense transformations to counterattack adversarial examples by parameterizing the affine transformations and exploiting the boundary information of DNNs. Extensive experiments on both toy and real-world data sets demonstrate the effectiveness and generalization of our defense method. The code is available at https://github.com/SCUTjinchengli/DefenseTransformer. [ABSTRACT FROM AUTHOR]
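A minimal sketch of applying a parameterized affine transformation to a batch of images, the kind of transformation the paper learns; the perturbation of the identity parameters below is illustrative, not the paper's learning rule:

```python
import torch
import torch.nn.functional as F

def apply_affine(x, theta):
    """Warp a batch of images with per-sample 2x3 affine matrices; in the
    paper these parameters are learned to pull adversarial examples back
    toward the clean distribution."""
    grid = F.affine_grid(theta, x.size(), align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

x = torch.rand(4, 3, 32, 32)                 # e.g. adversarial inputs
theta = torch.eye(2, 3).repeat(4, 1, 1)      # identity transform to start
theta = theta + 0.01 * torch.randn_like(theta)  # small illustrative offset
print(apply_affine(x, theta).shape)
```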
- Published
- 2023
- Full Text
- View/download PDF
40. Deep learning-accelerated computational framework based on Physics Informed Neural Network for the solution of linear elasticity.
- Author
-
Roy, Arunabha M., Bose, Rikhi, Sundararaghavan, Veera, and Arróyave, Raymundo
- Subjects
- *
DEEP learning , *ARTIFICIAL neural networks , *ELASTICITY , *LIGHTWEIGHT construction , *PARTIAL differential equations , *BENCHMARK problems (Computer science) - Abstract
The paper presents an efficient and robust data-driven deep learning (DL) computational framework developed for linear continuum elasticity problems. The methodology is based on the fundamentals of Physics Informed Neural Networks (PINNs). For an accurate representation of the field variables, a multi-objective loss function is proposed. It consists of terms corresponding to the residual of the governing partial differential equations (PDE), constitutive relations derived from the governing physics, various boundary conditions, and data-driven physical knowledge fitting terms across randomly selected collocation points in the problem domain. To this end, multiple densely connected independent artificial neural networks (ANNs), each approximating a field variable, are trained to obtain accurate solutions. Several benchmark problems, including the Airy solution to elasticity and the Kirchhoff–Love plate problem, are solved. Performance in terms of accuracy and robustness illustrates the superiority of the current framework, showing excellent agreement with analytical solutions. The present work combines the benefits of classical methods, which depend on the physical information available in analytical relations, with the superior capabilities of DL techniques in the data-driven construction of lightweight, yet accurate and robust, neural networks. The models developed herein can significantly boost computational speed using minimal network parameters, with easy adaptability to different computational platforms. • A physics-aware deep learning-based accelerated computational framework has been developed for solving linear elasticity problems. • A multi-objective loss functional is proposed that, when minimized, can accurately predict elastic field variables. • Several benchmark problems are solved, illustrating the usefulness of the model and showing excellent agreement with analytical solutions. • The current study demonstrates the applicability of data-driven enhancement using a transfer learning-based approach in reducing training time while simultaneously improving the accuracy of the model. [ABSTRACT FROM AUTHOR]
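As a toy illustration of a multi-objective PINN loss (a 1D bar under constant body force rather than the paper's 2D elasticity; constitutive and data terms omitted), the sketch below combines a PDE-residual term with displacement boundary terms:

```python
import torch
import torch.nn as nn

# Small network approximating the displacement u(x) of a 1D bar; the paper
# uses one ANN per field variable in 2D, so this is only a toy analogue.
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32),
                    nn.Tanh(), nn.Linear(32, 1))
E, f = 1.0, 1.0                        # illustrative modulus and body force

def pinn_loss(x_colloc):
    """PDE residual of E*u'' + f = 0 at collocation points plus boundary
    terms enforcing u(0) = 0 and u(1) = 0."""
    x = x_colloc.clone().requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    pde = (E * d2u + f).pow(2).mean()
    bc = net(torch.zeros(1, 1)).pow(2) + net(torch.ones(1, 1)).pow(2)
    return pde + bc.squeeze()

x = torch.rand(64, 1)                  # random collocation points in (0, 1)
print(pinn_loss(x))
```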
- Published
- 2023
- Full Text
- View/download PDF
41. Adaptive balancing of exploration and exploitation around the edge of chaos in internal-chaos-based learning.
- Author
-
Matsuki, Toshitaka and Shibata, Katsunari
- Subjects
- *
ARTIFICIAL neural networks , *EDGES (Geometry) , *ATTRACTORS (Mathematics) - Abstract
This paper addresses learning with exploration driven by the chaotic internal dynamics of a neural network. Hoerzer et al. showed that a chaotic reservoir network (RN) can learn with exploration driven by external random noise and a sequential reward. In this paper, we demonstrate that a chaotic RN can learn without external noise because the output fluctuation originating from its internal chaotic dynamics functions as exploration. As learning progresses, the chaoticity decreases and the network can automatically switch from exploration mode to exploitation mode. Furthermore, the network can resume exploration when presented with a new situation. In addition, we found that even when the two parameters that influence the chaoticity are varied, learning performance always improves around the edge of chaos. From these results, we conclude that exploration is generated by internal chaotic dynamics, and exploitation emerges in the process of forming attractors on the chaotic dynamics through learning. Consequently, exploration and exploitation are well balanced around the edge of chaos, which leads to good learning performance. [ABSTRACT FROM AUTHOR]
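To give a feel for the setting, here is a generic rate-based reservoir sketch (not the authors' model): with the recurrent gain above roughly 1, the autonomous dynamics become chaotic, and the fluctuating readout itself can serve as exploration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, g = 200, 1.5          # reservoir size; g > 1 pushes dynamics toward chaos
W = g * rng.standard_normal((N, N)) / np.sqrt(N)   # recurrent weights
w_out = rng.standard_normal(N) * 0.01              # readout (trained in RL)
x = rng.standard_normal(N)

# Autonomous discrete-time rate dynamics; learning (not shown) would adapt
# w_out from a reward signal, and as attractors form the chaoticity and
# hence the exploratory output fluctuation decrease.
for _ in range(100):
    x = np.tanh(W @ x)     # internal chaotic state update (toy version)
    z = w_out @ x          # fluctuating output used as exploration
print(z)
```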
- Published
- 2020
- Full Text
- View/download PDF
42. Feature flow regularization: Improving structured sparsity in deep neural networks.
- Author
-
Wu, Yue, Lan, Yuan, Zhang, Luchan, and Xiang, Yang
- Subjects
- *
ARTIFICIAL neural networks , *IMAGE recognition (Computer vision) - Abstract
Pruning is a model compression method that removes redundant parameters and accelerates the inference speed of deep neural networks (DNNs) while maintaining accuracy. Most available pruning methods impose various conditions on parameters or features directly. In this paper, we propose a simple and effective regularization strategy to improve structured sparsity and structured pruning in DNNs from a new perspective: the evolution of features. In particular, we consider the trajectories connecting features of adjacent hidden layers, namely the feature flow. We propose feature flow regularization (FFR) to penalize the length and the total absolute curvature of these trajectories, which implicitly increases the structured sparsity of the parameters. The principle behind FFR is that short and straight trajectories lead to an efficient network that avoids redundant parameters. Experiments on CIFAR-10 and ImageNet datasets show that FFR improves structured sparsity and achieves pruning results comparable to, or even better than, those of state-of-the-art methods. [ABSTRACT FROM AUTHOR]
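A minimal sketch of such a penalty, assuming per-layer features already projected to a common dimension (discrete differences stand in for length and curvature; the paper's exact formulation may differ):

```python
import torch

def feature_flow_regularizer(feats, lam_len=1e-4, lam_curv=1e-4):
    """Penalize the length (first differences) and total absolute curvature
    (second differences) of the trajectory a sample's features trace across
    hidden layers."""
    diffs = [b - a for a, b in zip(feats, feats[1:])]        # flow segments
    length = sum(d.norm(dim=1).mean() for d in diffs)
    curv = sum((d2 - d1).norm(dim=1).mean()
               for d1, d2 in zip(diffs, diffs[1:]))
    return lam_len * length + lam_curv * curv

feats = [torch.randn(8, 64) for _ in range(5)]  # features at 5 hidden layers
print(feature_flow_regularizer(feats))
```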
- Published
- 2023
- Full Text
- View/download PDF
43. Simultaneous approximation of a smooth function and its derivatives by deep neural networks with piecewise-polynomial activations.
- Author
-
Belomestny, Denis, Naumov, Alexey, Puchkin, Nikita, and Samsonov, Sergey
- Subjects
- *
ARTIFICIAL neural networks , *SMOOTHNESS of functions , *DERIVATIVES (Mathematics) , *STATISTICAL learning , *ANALYTIC functions , *POLYNOMIAL approximation - Abstract
This paper investigates the approximation properties of deep neural networks with piecewise-polynomial activation functions. We derive the required depth, width, and sparsity of a deep neural network to approximate any Hölder smooth function up to a given approximation error in Hölder norms in such a way that all weights of this neural network are bounded by 1. The latter feature is essential to control generalization errors in many statistical and machine learning applications. • Rates and complexity for smooth function approximation in Hölder norms by ReQU neural networks. • Explicit and uniform bounds for weights of the approximating neural network. • Exponential convergence rates for analytic functions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
44. SPIDE: A purely spike-based method for training feedback spiking neural networks.
- Author
-
Xiao, Mingqing, Meng, Qingyan, Zhang, Zongpeng, Wang, Yisen, and Lin, Zhouchen
- Subjects
- *
ARTIFICIAL neural networks , *ACTION potentials , *SUPERVISED learning , *APPROXIMATION error , *FLEXIBLE structures , *ENERGY industries - Abstract
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware. However, most supervised SNN training methods, such as conversion from artificial neural networks or direct training with surrogate gradients, require complex computation rather than the spike-based operations of spiking neurons during training. In this paper, we study spike-based implicit differentiation on the equilibrium state (SPIDE), which extends the recently proposed training method of implicit differentiation on the equilibrium state (IDE) to supervised learning with purely spike-based computation, demonstrating the potential for energy-efficient training of SNNs. Specifically, we introduce ternary spiking neuron couples and prove that implicit differentiation can be solved by spikes based on this design, so the whole training procedure, including both forward and backward passes, is carried out as event-driven spike computation, and weights are updated locally with two-stage average firing rates. We then propose modifying the reset membrane potential to reduce the approximation error of spikes. With these key components, we can train SNNs with flexible structures in a small number of time steps and with firing sparsity during training, and a theoretical estimation of energy costs demonstrates the potential for high efficiency. Meanwhile, experiments show that even with these constraints, our trained models can still achieve competitive results on MNIST, CIFAR-10, CIFAR-100, and CIFAR10-DVS. • Novel method with purely spike-based computation to train spiking neural networks. • Analysis of the approximation error of spikes and a method to reduce the error. • Much lower energy costs through low latency and firing sparsity during training. • Competitive performance on static and neuromorphic datasets. [ABSTRACT FROM AUTHOR]
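SPIDE's contribution is realizing these computations with spikes; the sketch below only shows the underlying implicit-differentiation-at-equilibrium idea on a rate-based toy model with made-up sizes (the spiking realization and ternary neuron couples are not depicted):

```python
import numpy as np

# Toy equilibrium model: average firing rates satisfy a = tanh(W a + U x).
rng = np.random.default_rng(0)
n = 10
W = 0.1 * rng.standard_normal((n, n))   # weak recurrence -> stable fixed point
U = rng.standard_normal((n, n))
x = rng.standard_normal(n)

a = np.zeros(n)
for _ in range(100):                    # fixed-point iteration to equilibrium
    a = np.tanh(W @ a + U @ x)

# Implicit gradient for a loss L(a*) = 0.5 * ||a*||^2: differentiate the
# fixed-point condition to get da* = (I - J)^{-1} D dW a* with
# J = D W and D = diag(tanh'(pre)).
pre = W @ a + U @ x
D = np.diag(1 - np.tanh(pre) ** 2)
J = D @ W
dL_da = a.copy()
v = np.linalg.solve((np.eye(n) - J).T, dL_da)   # adjoint linear solve
dL_dW = np.outer(D @ v, a)                      # gradient w.r.t. W
print(dL_dW.shape)
```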
- Published
- 2023
- Full Text
- View/download PDF
45. Tucker network: Expressive power and comparison.
- Author
-
Liu, Ye, Pan, Junjun, and Ng, Michael K.
- Subjects
- *
ARTIFICIAL neural networks , *COMPUTER vision , *DEEP learning , *MACHINE learning - Abstract
Deep neural networks have achieved great success in solving many machine learning and computer vision problems. In this paper, we propose a deep neural network called the Tucker network, derived from the Tucker format, and analyze its expressive power. The results demonstrate that the Tucker network has exponentially higher expressive power than the shallow network; in other words, a shallow network of exponential width is required to realize the same score function as that computed by the Tucker network. Moreover, we compare the expressive power of the hierarchical Tucker tensor network (HT network) and the proposed Tucker network. To generalize the Tucker network into a deep version, we combine the hierarchical Tucker format and the Tucker format to propose a deep Tucker tensor decomposition, and its corresponding deep Tucker network is presented. Experiments are conducted on three datasets: MNIST, CIFAR-10 and CIFAR-100. The results experimentally validate the theoretical findings and show that the Tucker network and deep Tucker network perform better than the shallow network and HT network. [ABSTRACT FROM AUTHOR]
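To make the Tucker format concrete, here is a toy score function in that format for three input patches: a core tensor contracted with per-mode linear representations (sizes are illustrative, not the paper's architecture):

```python
import numpy as np

def tucker_score(core, factors, xs):
    """Tucker-format score: sum_{ijk} G[i,j,k] (A1 x1)_i (A2 x2)_j (A3 x3)_k,
    i.e. a core tensor G contracted with per-mode representations A_m x_m."""
    reps = [A @ x for A, x in zip(factors, xs)]
    return np.einsum('ijk,i,j,k->', core, *reps)

rng = np.random.default_rng(0)
core = rng.standard_normal((4, 4, 4))             # Tucker core tensor
factors = [rng.standard_normal((4, 16)) for _ in range(3)]
xs = [rng.standard_normal(16) for _ in range(3)]  # three local input patches
print(tucker_score(core, factors, xs))
```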
- Published
- 2023
- Full Text
- View/download PDF
46. DF-UDetector: An effective method towards robust deepfake detection via feature restoration.
- Author
-
Ke, Jianpeng and Wang, Lina
- Subjects
- *
ARTIFICIAL neural networks , *DEEPFAKES - Abstract
The abuse of deepfakes, a rising face-swap technique, causes severe concerns about the authenticity of visual content and the dissemination of misinformation. To alleviate the threats posed by deepfakes, a vast body of data-centric detectors has been deployed. However, the performance of these methods is easily undermined by degradations applied to deepfakes. To improve the detection of degraded deepfakes, we explore recovery in the feature space, preserving the artifacts needed for detection, rather than recovering directly in the image domain. In this paper, we propose a method, namely DF-UDetector, against degraded deepfakes that models the degraded images and transforms the extracted features to a high-quality level. Specifically, the whole model consists of three key components: an image feature extractor to capture image features, a feature transforming module to map the degraded features to a higher quality, and a discriminator to determine whether the feature map is of sufficiently high quality. Extensive experiments on multiple video datasets show that our proposed model performs comparably to or even better than state-of-the-art counterparts. Moreover, DF-UDetector outperforms existing methods by a small margin when detecting deepfakes in the wild. [ABSTRACT FROM AUTHOR]
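A structural sketch of the three named components with toy, placeholder architectures (the paper's actual networks and training objectives are not specified here):

```python
import torch
import torch.nn as nn

# Placeholder modules mirroring the abstract's three components.
extractor = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(4), nn.Flatten())
transformer = nn.Sequential(nn.Linear(16 * 16, 256), nn.ReLU(),
                            nn.Linear(256, 16 * 16))
discriminator = nn.Sequential(nn.Linear(16 * 16, 64), nn.ReLU(),
                              nn.Linear(64, 1))

x_degraded = torch.rand(4, 3, 64, 64)     # degraded deepfake frames
feat = extractor(x_degraded)              # degraded-domain features
restored = transformer(feat)              # mapped toward high-quality space
quality_logit = discriminator(restored)   # adversarial quality signal
print(quality_logit.shape)
```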
- Published
- 2023
- Full Text
- View/download PDF
47. Factorizing time-heterogeneous Markov transition for temporal recommendation.
- Author
-
Wen, Wen, Wang, Wencui, Hao, Zhifeng, and Cai, Ruichu
- Subjects
- *
FACTORIZATION , *ARTIFICIAL neural networks - Abstract
Temporal recommendation, which recommends items to users with consideration of time information, has been of wide interest in recent years. However, the huge event space, highly sparse user activities, and time-heterogeneous dependency of temporal behaviors make it challenging to learn the temporal patterns needed for high-quality recommendation. In this paper, aiming to handle these challenges, especially the time-heterogeneous characteristic of users' temporal behaviors, we propose the Neural-based Time-heterogeneous Markov Transition (NeuralTMT) model. First, users' temporal behaviors are modeled as third-order Markov transition tensors. A linear co-factorization model is then proposed that learns time-evolving user/item factors from these tensors. Furthermore, the model is extended to a neural-based learning framework (NeuralTMT), which is more flexible and able to capture time-heterogeneous temporal patterns via nonlinear neural network mappings and attention techniques. Extensive experiments on four datasets demonstrate that NeuralTMT performs significantly better than state-of-the-art baselines. The proposed method is fundamentally inspired by factorization techniques, which may also provide some interesting ideas on the connection between tensor factorization and neural-based sequential recommendation methods. [ABSTRACT FROM AUTHOR]
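As a hedged sketch of what third-order transition statistics could look like (the paper's exact tensor construction may differ), the function below counts item-to-item transitions per time bucket and normalizes them into transition probabilities:

```python
import numpy as np

def transition_tensors(events, n_items, n_buckets):
    """Build a tensor T[t, i, j] of transition probabilities from item i to
    item j within time bucket t; `events` is a per-user list of
    (time_bucket, item) pairs in chronological order."""
    T = np.zeros((n_buckets, n_items, n_items))
    for seq in events:
        for (_, prev_item), (t, item) in zip(seq, seq[1:]):
            T[t, prev_item, item] += 1.0
    sums = T.sum(axis=2, keepdims=True)   # normalize rows where counts exist
    return np.divide(T, sums, out=np.zeros_like(T), where=sums > 0)

events = [[(0, 1), (0, 2), (1, 2), (1, 0)], [(0, 2), (1, 0)]]
print(transition_tensors(events, n_items=3, n_buckets=2)[1])
```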
- Published
- 2023
- Full Text
- View/download PDF
48. Variable three-term conjugate gradient method for training artificial neural networks.
- Author
-
Kim, Hansu, Wang, Chuxuan, Byun, Hyoseok, Hu, Weifei, Kim, Sanghyuk, Jiao, Qing, and Lee, Tae Hee
- Subjects
- *
CONJUGATE gradient methods , *NEWTON-Raphson method , *CONVOLUTIONAL neural networks , *HESSIAN matrices , *ARTIFICIAL neural networks , *COMPUTER science - Abstract
Artificial neural networks (ANNs) have been widely adopted as general computational tools in computer science as well as many other engineering fields. Stochastic gradient descent (SGD) and adaptive methods such as Adam are popular robust optimization algorithms used to train ANNs. However, the effectiveness of these algorithms is limited because they calculate a search direction based only on the first-order gradient. Although higher-order methods such as Newton's method have been proposed, they require the Hessian matrix to be positive semi-definite, and its inversion incurs a high computational cost. Therefore, in this paper, we propose a variable three-term conjugate gradient (VTTCG) method that approximates the Hessian matrix to enhance the search direction and uses a variable step size to achieve improved convergence stability. To evaluate the performance of the VTTCG method, we train different ANNs on benchmark image classification and generation datasets. We also conduct a similar experiment in which a grasp generation and selection convolutional neural network (GGS-CNN) is trained to perform intelligent robotic grasping. After considering a simulated environment, we also test the GGS-CNN with a physical grasping robot. The experimental results show that the performance of the VTTCG method is superior to that of four conventional methods, including SGD, Adam, AMSGrad, and AdaBelief. [ABSTRACT FROM AUTHOR]
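For orientation, a three-term conjugate gradient direction has the form d_{k+1} = -g_{k+1} + beta_k d_k - theta_k y_k with y_k = g_{k+1} - g_k. The sketch below uses one common (Hestenes-Stiefel-style) coefficient choice on a small quadratic, not necessarily the paper's VTTCG formulas or step-size rule:

```python
import numpy as np

def ttcg_quadratic(A, b, x0, steps=50):
    """Three-term CG sketch on f(x) = 0.5 x'Ax - b'x with exact line search
    (valid for quadratics); coefficients follow a common HS-style variant."""
    x = x0.copy()
    g = A @ x - b
    d = -g
    for _ in range(steps):
        alpha = -(g @ d) / (d @ A @ d)        # exact step for quadratics
        x = x + alpha * d
        g_new = A @ x - b
        y = g_new - g
        denom = d @ y
        if abs(denom) < 1e-12:                # converged or degenerate
            break
        beta = (g_new @ y) / denom
        theta = (g_new @ d) / denom
        d = -g_new + beta * d - theta * y     # three-term direction update
        g = g_new
    return x

A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 2.0])
print(ttcg_quadratic(A, b, np.zeros(2)))      # approaches A^{-1} b = [0, 2]
```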
- Published
- 2023
- Full Text
- View/download PDF
49. A pruning feedforward small-world neural network based on Katz centrality for nonlinear system modeling.
- Author
-
Li, Wenjing, Chu, Minghui, and Qiao, Junfei
- Subjects
- *
NONLINEAR systems , *PRUNING , *BIOLOGICAL neural networks , *ARTIFICIAL neural networks , *CENTRALITY , *FEEDFORWARD neural networks , *ALGORITHMS - Abstract
By approaching the structure of biological neural networks, small-world neural networks have been demonstrated to improve the generalization performance of artificial neural networks. However, the architecture of small-world neural networks is typically large and predefined. This may cause overfitting and long training times, and it prevents obtaining an optimal network structure automatically for a given problem. To solve these problems, this paper proposes a pruning feedforward small-world neural network (PFSWNN) and applies it to nonlinear system modeling. First, a feedforward small-world neural network (FSWNN) is constructed according to the rewiring rule of Watts–Strogatz. Second, the importance of each hidden neuron is evaluated based on its Katz centrality. If the Katz centrality of a hidden neuron is below a predefined threshold, the neuron is considered unimportant and merged with its most correlated neuron in the same hidden layer. The connection weights are trained using a gradient-based algorithm, and the convergence of the proposed PFSWNN is theoretically analyzed in this paper. Finally, the PFSWNN model is tested on several nonlinear system modeling problems, including the approximation of a rapidly changing function, CATS missing time-series prediction, four benchmark problems from UCI public datasets, and a practical problem from the wastewater treatment process. Experimental results demonstrate that PFSWNN exhibits superior generalization performance owing to the small-world property as well as the pruning algorithm, and its training time is shortened owing to the compact structure. • The importance of the hidden nodes is measured using Katz centrality. • The small-world property and pruning improve the generalization performance of PFSWNN. • PFSWNN can self-adjust to a compact structure via the pruning algorithm. • The convergence of PFSWNN can be guaranteed. [ABSTRACT FROM AUTHOR]
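Katz centrality itself has a closed form, x = beta (I - alpha A^T)^{-1} 1; the sketch below computes it on a random directed graph and marks low-centrality nodes, with the threshold chosen purely for illustration (the merging step of the paper is omitted):

```python
import numpy as np

def katz_centrality(adj, alpha=0.1, beta=1.0):
    """Katz centrality x = beta * (I - alpha * A^T)^{-1} * 1; alpha must be
    below 1 over the spectral radius of A for the series to converge."""
    n = adj.shape[0]
    return beta * np.linalg.solve(np.eye(n) - alpha * adj.T, np.ones(n))

adj = (np.random.default_rng(0).random((8, 8)) < 0.3).astype(float)
np.fill_diagonal(adj, 0)
scores = katz_centrality(adj)
prune_mask = scores < np.quantile(scores, 0.25)   # illustrative threshold
print(scores.round(3), prune_mask)
```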
- Published
- 2020
- Full Text
- View/download PDF
50. Quantum neural networks model based on swap test and phase estimation.
- Author
-
Li, Panchi and Wang, Bing
- Subjects
- *
ARTIFICIAL neural networks , *QUANTUM computing , *QUANTUM gates , *QUANTUM computers , *COMPUTER simulation , *QUBITS - Abstract
In this paper, a neural network model for quantum computers is proposed. The core of this model is the quantum neuron. First, the inner product of the input qubits and the weight qubits is mapped to the phase of the control qubit in the neuron using the swap test, and then these phases are obtained by the phase estimation method and further used as the phase of the output qubit of the neuron. In this way, the mapping of input qubits to the output qubit in a quantum neuron is completed. The quantum neurons described above can be used to construct quantum neural networks. In this paper, the quantum circuit for each operation step is given. Simulation results on a classical computer verify the effectiveness of the proposed model. [ABSTRACT FROM AUTHOR]
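The swap test reads out an inner product statistically: the control qubit is measured in |0> with probability P(0) = 1/2 + |<a|b>|^2 / 2. A small NumPy check of this identity (classical simulation of the measurement statistics only, not the paper's full circuit):

```python
import numpy as np

def swap_test_p0(a, b):
    """Probability of measuring the control qubit in |0> after a swap test
    on normalized states |a> and |b>: 0.5 + 0.5 * |<a|b>|^2."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return 0.5 + 0.5 * abs(np.vdot(a, b)) ** 2

a = np.array([1.0, 1.0]) / np.sqrt(2)   # |+>
b = np.array([1.0, 0.0])                # |0>
print(swap_test_p0(a, b))               # 0.5 + 0.5 * 0.5 = 0.75
```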
- Published
- 2020
- Full Text
- View/download PDF