19 results
Search Results
2. Prediction algorithm and simulation of tennis impact area based on semantic analysis of prior knowledge.
- Author
-
Ke, Yong, Liu, Zhen, and Liu, Sai
- Subjects
DEEP learning, COMPUTER vision, CONVOLUTIONAL neural networks, PRIOR learning, TENNIS, ALGORITHMS
- Abstract
The performance of target detection algorithms based on hand-crafted features has gradually saturated, and progress in target detection had stagnated. Computer vision is the discipline that studies how computers can replace the human eye, using visual information and visual algorithms to automatically detect, recognize, and track target objects. Target detection has also developed fully as a basis for solving higher-level vision tasks such as target tracking, instance segmentation, image understanding, and behavior recognition. The theories of artificial intelligence and deep learning have gradually developed, making convolutional neural networks a focus of attention in computer vision and image processing. Building on work in target detection, this paper proposes a target detection algorithm based on prior knowledge of the tennis ball's impact area. Our experiments and evaluations show that the algorithm achieves higher detection accuracy and faster detection speed. Moreover, the proposed algorithm accurately tracks and judges the impact area in tennis, improving the accuracy and reliability of impact-area detection. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
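The "prior knowledge" idea in the abstract above, using a known impact region to constrain where detections are accepted, can be illustrated with a toy sketch. The function names and the rectangular prior here are assumptions for illustration, not the paper's actual method:

```python
# Toy sketch (not the paper's algorithm): use a prior "impact area"
# rectangle to discard candidate detections that fall outside it.

def center(box):
    """Center (x, y) of a box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def filter_by_prior(detections, prior):
    """Keep detections whose center lies inside the prior region.

    detections: list of (box, score); prior: (x1, y1, x2, y2).
    """
    px1, py1, px2, py2 = prior
    kept = []
    for box, score in detections:
        cx, cy = center(box)
        if px1 <= cx <= px2 and py1 <= cy <= py2:
            kept.append((box, score))
    return kept

# Example: only the first detection's center falls inside the prior area.
prior_area = (0, 0, 100, 50)
dets = [((10, 10, 30, 30), 0.9), ((150, 60, 170, 80), 0.8)]
print(filter_by_prior(dets, prior_area))  # [((10, 10, 30, 30), 0.9)]
```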
3. Quantum OPTICS and deep self-learning on swarm intelligence algorithms for Covid-19 emergency transportation.
- Author
-
Drias, Habiba, Drias, Yassine, Houacine, Naila Aziza, Bendimerad, Lydia Sonia, Zouache, Djaafar, and Khennak, Ilyes
- Subjects
SWARM intelligence, QUANTUM optics, COVID-19 pandemic, MACHINE learning, ALGORITHMS, DEEP learning
- Abstract
In this paper, quantum technology is exploited to empower the OPTICS unsupervised learning algorithm, a density-based clustering algorithm with numerous real-world applications. We design an algorithm called Quantum Ordering Points To Identify the Clustering Structure (QOPTICS) and demonstrate that its computational complexity outperforms that of its classical counterpart. On the other hand, we propose a deep self-learning approach for improving two swarm intelligence algorithms, namely the Artificial Orca Algorithm (AOA) and Elephant Herding Optimization (EHO), to increase their effectiveness. The deep self-learning approach is based on two well-known dynamic mutation operators, the Cauchy mutation operator and the Gaussian mutation operator. To improve the efficiency of these algorithms, they are hybridized with QOPTICS and executed on just one of the clusters it yields. In this way, both effectiveness and efficiency are handled. To evaluate the proposed approaches, an intelligent application is developed to manage the dispatching of emergency vehicles in a large geographic region in the context of the Covid-19 crisis, in order to avoid significant loss of human life. A theoretical model is designed to describe the issue mathematically. Extensive experiments are then performed to validate the mathematical model and evaluate the performance of the proposed deep self-learning algorithms. Comparison with a state-of-the-art technique shows a significant positive impact of hybridizing Quantum Machine Learning (QML) with Deep Self-Learning (DSL) on solving Covid-19 EMS transportation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
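The two dynamic mutation operators named in the abstract above, Cauchy and Gaussian, can be sketched minimally as below. The perturbation scale and function names are illustrative assumptions, not the paper's parameterization:

```python
import math
import random

def gaussian_mutation(position, scale=0.1, rng=random):
    """Perturb each coordinate with Gaussian noise."""
    return [x + rng.gauss(0.0, scale) for x in position]

def cauchy_mutation(position, scale=0.1, rng=random):
    """Perturb each coordinate with Cauchy noise, sampled via the
    inverse CDF: tan(pi * (u - 0.5)) for u uniform on (0, 1)."""
    return [x + scale * math.tan(math.pi * (rng.random() - 0.5))
            for x in position]

rng = random.Random(0)
p = [1.0, 2.0, 3.0]
print(gaussian_mutation(p, rng=rng))
print(cauchy_mutation(p, rng=rng))
```

The Cauchy operator's heavy tails occasionally produce large jumps (useful for escaping local optima), while the Gaussian operator makes small local refinements.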
4. Efficient hyperparameters optimization through model-based reinforcement learning with experience exploiting and meta-learning.
- Author
-
Liu, Xiyuan, Wu, Jia, and Chen, Senpeng
- Subjects
MACHINE learning, MACHINE performance, REINFORCEMENT learning, DEEP learning, PREDICTION models, ALGORITHMS
- Abstract
Hyperparameter optimization plays a significant role in the overall performance of machine learning algorithms. However, the computational cost of algorithm evaluation can be extremely high for complex algorithms or large datasets. In this paper, we propose a model-based reinforcement learning method with an experience variable and meta-learning to speed up the training process of hyperparameter optimization. Specifically, an RL agent is employed to select hyperparameters, and the k-fold cross-validation result is treated as a reward signal to update the agent. To guide the agent's policy update, we design an embedding representation called the "experience variable" and update it dynamically during training. In addition, we employ a predictive model to predict the performance of the machine learning algorithm with the selected hyperparameters, and we limit model rollouts to a short horizon to reduce the impact of model inaccuracy. Finally, we use meta-learning to pre-train the model for fast adaptation to new tasks. To demonstrate the advantages of our method, we conduct experiments on 25 real HPO tasks; the results show that, with limited computational resources, the proposed method outperforms state-of-the-art Bayesian and evolutionary methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
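The loop the abstract above describes, an agent selecting hyperparameters and receiving a cross-validation score as its reward, can be sketched with a simple epsilon-greedy stand-in. This is a toy, not the paper's model-based method: `cv_score` is a simulated, deterministic placeholder for a real k-fold CV run, and the grid is illustrative:

```python
import random

def cv_score(lr):
    """Simulated k-fold CV accuracy, peaked at lr = 0.1."""
    return 1.0 - abs(lr - 0.1)

def run_agent(grid, steps=100, eps=0.2, seed=0):
    rng = random.Random(seed)
    # One initial evaluation per configuration.
    value = {h: cv_score(h) for h in grid}
    count = {h: 1 for h in grid}
    for _ in range(steps):
        if rng.random() < eps:
            h = rng.choice(grid)                   # explore
        else:
            h = max(grid, key=lambda g: value[g])  # exploit
        r = cv_score(h)                            # "CV" reward signal
        count[h] += 1
        value[h] += (r - value[h]) / count[h]      # incremental mean
    return max(grid, key=lambda g: value[g])

print(run_agent([0.001, 0.01, 0.1, 0.5]))  # 0.1 maximizes the score
```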
5. A novel clustering algorithm based on multi-layer features and graph attention networks.
- Author
-
Hou, Haiwei, Ding, Shifei, Xu, Xiao, and Ding, Ling
- Subjects
DEEP learning, ALGORITHMS, GRAPH algorithms
- Abstract
Clustering is a fundamental task in the field of data analysis. With the development of deep learning, deep clustering focuses on learning meaningful representations with neural networks. Ensemble clustering algorithms combine multiple base partitions into a robust, higher-quality consensus clustering. Current deep ensemble clustering algorithms usually neglect shallow and original features, and few algorithms use graph attention networks to explore clustering structure. This paper proposes a novel Clustering algorithm based on Multi-layer Features and Graph attention Networks (CMFGN). CMFGN obtains multi-layer features through hierarchical convolutional layers. Moreover, CMFGN combines the co-association matrix with the original features as the input to a Graph Attention Network (GAT) to obtain the consensus clustering, which reuses the original information and leverages the GAT to inherit a good clustering structure. Extensive experimental results show that CMFGN clearly outperforms competitive methods on four challenging image datasets. In particular, CMFGN achieves an accuracy (ACC) of 82.14% on the Digits dataset, almost a 6% improvement over the best baseline. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
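The co-association matrix mentioned in the abstract above is a standard ensemble-clustering construct: entry (i, j) is the fraction of base partitions that put samples i and j in the same cluster. A minimal sketch (the toy partitions are illustrative):

```python
def co_association(partitions):
    """Build the n x n co-association matrix from m base partitions,
    each given as a list of cluster labels of length n."""
    n = len(partitions[0])
    m = len(partitions)
    C = [[0.0] * n for _ in range(n)]
    for labels in partitions:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    C[i][j] += 1.0 / m
    return C

# Three base partitions of four samples.
parts = [[0, 0, 1, 1],
         [0, 0, 0, 1],
         [0, 1, 1, 1]]
C = co_association(parts)
print(C[0][1])  # samples 0 and 1 are co-clustered in 2 of 3 partitions
```

In CMFGN's setting, a matrix like this (combined with the original features) forms the GAT input.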
6. Region of interest-based predictive algorithm for subretinal hemorrhage detection using faster R-CNN.
- Author
-
Suchetha, M., Ganesh, N. Sai, Raman, Rajiv, and Dhas, D. Edwin
- Subjects
ALGORITHMS, CONVOLUTIONAL neural networks, DEEP learning, MACULAR edema, OPTICAL coherence tomography, VISION disorders
- Abstract
Macular edema (ME) is a common macular disorder caused by the accumulation of fluid beneath the macula. Age-related macular degeneration (AMD) and diabetic macular edema (DME) are two common ocular conditions that can lead to partial or complete vision loss. This paper proposes a deep learning-based predictive algorithm for detecting the presence of a subretinal hemorrhage. A region-based convolutional neural network (R-CNN) and Faster R-CNN are used to develop the predictive algorithm and improve classification accuracy. The method first detects the presence of a subretinal hemorrhage and then segments the region of interest (ROI) through semantic segmentation. The segmented ROI is passed to a predictive algorithm derived from the Fast R-CNN algorithm, which categorizes the subretinal hemorrhage as responsive or non-responsive. A dataset provided by a medical institution, comprising optical coherence tomography (OCT) images from both pre- and post-treatment stages, was used for training the proposed Faster R-CNN. We also used the Kaggle dataset for performance comparison with traditional methods derived from the convolutional neural network (CNN) algorithm. Evaluation on the Kaggle dataset and the hospital images yields an average sensitivity, selectivity, and accuracy of 85.3%, 89.64%, and 93.48%, respectively. Further, the proposed method requires 2.64 s for testing, less than traditional schemes such as CNN, R-CNN, and Fast R-CNN. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. Ensemble classification for intrusion detection via feature extraction based on deep Learning.
- Author
-
Yousefnezhad, Maryam, Hamidzadeh, Javad, and Aliannejadi, Mohammad
- Subjects
DEEP learning, FEATURE extraction, ARTIFICIAL neural networks, SUPPORT vector machines, ALGORITHMS, DECISION trees
- Abstract
An intrusion detection system is a security system that aims to detect sabotage and intrusions on networks in order to inform experts of attacks and abuse of the network. Different classification methods have been used in intrusion detection systems, such as fuzzy methods, genetic algorithms, decision trees, artificial neural networks, and support vector machines. Moreover, ensemble classifiers have shown more robust and effective performance for various tasks in the field. In this paper, we adopt ensemble models to improve the performance of intrusion detection while decreasing the false alarm rate. We use kNN for multi-class classification, as well as SVM to approach the classification problem in normal-based detection. To combine multiple outputs, we use the Dempster–Shafer method, which allows uncertainty to be represented explicitly. Moreover, we utilize deep learning to extract features for training the samples, selected by an ensemble-margin-based sample selection algorithm. We compare our results with state-of-the-art methods on benchmark datasets such as UNSW-NB15, CICIDS2017, and NSL-KDD. Our proposed method is superior in terms of the prominent metrics accuracy, precision, recall, and F-measure. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
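Dempster's rule of combination, which the abstract above uses to fuse the kNN and SVM outputs, can be sketched for a two-class frame {normal, attack}. The mass values below are made up for illustration; the "either" set carries each classifier's explicit uncertainty:

```python
def dempster(m1, m2):
    """Combine two mass functions given as dicts over frozensets,
    using Dempster's rule: multiply masses, drop conflicting pairs
    (empty intersection), and renormalize."""
    combined = {}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

N, A = frozenset({"normal"}), frozenset({"attack"})
E = N | A  # "either": explicit uncertainty
m_knn = {N: 0.6, A: 0.3, E: 0.1}
m_svm = {N: 0.7, A: 0.2, E: 0.1}
m = dempster(m_knn, m_svm)
print(m[N] > m[A])  # True: both classifiers lean toward "normal"
```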
8. Optimal artificial neural network-based data mining technique for stress prediction in working employees.
- Author
-
Anitha, S. and Vanitha, M.
- Subjects
JOB stress, DEEP learning, DATA mining, ARTIFICIAL neural networks, WORK environment, JOB performance, ALGORITHMS
- Abstract
Depression has become a common issue among IT-industry professionals today. Lifestyle changes and new work cultures increase the risk of depression among employees. Various companies and organizations offer mental-health plans and try to calm the work environment; however, the problem is already out of control. This paper proposes an effective deep learning model for stress prediction among working employees with the help of a lion optimization-based optimal artificial neural network (OANN). Here, features are selected using the optimal ANN technique, and the disease is predicted using the lion optimization method. The ANN technique eliminates inappropriate and unnecessary attributes significantly, after which the information on the computed characteristics and weights is passed to the lion optimization classifier. The test results show that the proposed OANN method, based on lion optimization, is highly efficient compared with a conventional artificial neural network. The study evaluated the data and found that employees working under normal conditions performed better than employees working under stress. Furthermore, attitude-coping efforts may be a cognitive-behavioral mechanism that explains how workload relates to the courage and work performance of employees with high stress levels. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
9. A distributed parallel training method of deep belief networks.
- Author
-
Shi, Guang, Zhang, Jiangshe, Zhang, Chunxia, and Hu, Junying
- Subjects
DISTRIBUTED computing, DEEP learning, ALGORITHMS, PARALLEL programming, MODELS & modelmaking, MACHINE learning, COMPUTER workstation clusters
- Abstract
Nowadays, it is well known that efficient training of deep neural networks plays a vital role in various successful applications. To achieve this goal, it is impractical to use only one computer, especially when models are large and efficient computing resources are available. In this paper, we present a distributed parallel computing framework for training deep belief networks (DBNs) that employs the power of high-performance clusters (i.e., systems consisting of many computers). Motivated by the greedy layer-wise learning algorithm of DBNs, the whole training process is divided layer by layer and distributed across different machines. At the same time, rough representations are exploited to parallelize the training process. Experiments on several large-scale real datasets show that the novel algorithms significantly accelerate DBN training while achieving better or competitive prediction accuracy compared with the original algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
10. EEG signal classification using LSTM and improved neural network algorithms.
- Author
-
Nagabushanam, P., Thomas George, S., and Radha, S.
- Subjects
SIGNAL classification, RADIAL basis functions, BRAIN-computer interfaces, MACHINE learning, ELECTROENCEPHALOGRAPHY, DEEP learning, ALGORITHMS
- Abstract
Neural networks (NNs) find roles in a variety of applications owing to the combined effect of feature extraction and classification available in deep learning algorithms. In this paper, we choose the SVM and logistic regression machine learning algorithms, together with NNs, for EEG signal classification. A two-layer LSTM and a four-layer improved NN are proposed to improve EEG classification performance. The novelty lies in one-dimensional gradient-descent activation functions with radial basis operations used in the initial layers of the improved NN, which help achieve better performance. Statistical features, namely mean, standard deviation, kurtosis, and skewness, are extracted from input EEG collected from the Bonn database and then fed to the various classification techniques. Accuracy, precision, recall, and F1 score are the performance metrics used to analyze the algorithms. The improved NN and LSTM give better performance than all other architectures. The simulations are carried out with a variety of activation functions, optimizers, and loss models, implemented in Python with Keras. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
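The four statistical features named in the abstract above (mean, standard deviation, kurtosis, skewness) are simple to compute per EEG segment. A minimal sketch using the population (biased) moment definitions; the toy segment is illustrative:

```python
import math

def features(x):
    """Mean, std, skewness, kurtosis of one signal segment
    (population moments; kurtosis is non-excess)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    std = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in x) / (n * std ** 3)
    kurt = sum((v - mean) ** 4 for v in x) / (n * std ** 4)
    return mean, std, skew, kurt

seg = [1.0, 2.0, 3.0, 4.0, 5.0]
mean, std, skew, kurt = features(seg)
print(mean, std, skew, kurt)  # skewness is 0 for a symmetric segment
```

Each EEG channel segment then contributes a four-dimensional feature vector to the classifier.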
11. An autoencoder-based spectral clustering algorithm.
- Author
-
Li, Xinning, Zhao, Xiaoxiao, Chu, Derun, and Zhou, Zhiping
- Subjects
LAPLACIAN matrices, DEEP learning, ALGORITHMS, K-means clustering, MATRIX decomposition, COMPUTATIONAL complexity
- Abstract
The spectral clustering algorithm suffers from high computational complexity due to the eigendecomposition of the Laplacian matrix and the large similarity matrix for large-scale datasets. Some studies explore the possibility of deep learning in spectral clustering and propose replacing the eigendecomposition with an autoencoder. K-means clustering is generally used to obtain clustering results from the embedding representation, which improves efficiency but further increases memory consumption. To address this, an efficient spectral clustering algorithm based on a stacked autoencoder is proposed. In this paper, we select representative data points as landmarks and use the similarity between the landmarks and all data points as the autoencoder's input, instead of the similarity matrix of the whole dataset. To further refine the clustering result, we combine learning the embedding representation with performing clustering. A clustering loss is used to update the parameters of the autoencoder and the cluster centers simultaneously. A reconstruction loss is also included to prevent distortion of the embedding space and preserve the local structure of the data. Experiments on several large-scale datasets validate the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
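The landmark idea in the abstract above replaces the full n x n similarity matrix with an n x p matrix of similarities to p landmark points. A minimal sketch with a Gaussian kernel (the points, landmarks, and bandwidth are illustrative):

```python
import math

def landmark_similarity(points, landmarks, sigma=1.0):
    """n x p matrix of Gaussian-kernel similarities between each
    point and each landmark."""
    def sim(a, b):
        d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return math.exp(-d2 / (2.0 * sigma ** 2))
    return [[sim(p, l) for l in landmarks] for p in points]

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
lms = [(0.0, 0.0), (5.0, 5.0)]        # p = 2 landmarks
S = landmark_similarity(pts, lms)
print(len(S), len(S[0]))  # 3 2 : n x p, not n x n
```

For large n with p much smaller than n, this is the memory saving that makes the autoencoder input tractable.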
12. Deep learning algorithm development for river flow prediction: PNP algorithm.
- Author
-
Bak, Gwiman and Bae, Youngchul
- Subjects
DEEP learning, MACHINE learning, STREAMFLOW, NATURAL disasters, STANDARD deviations, ALGORITHMS
- Abstract
Deep learning algorithms developed in recent decades have performed well in prediction and classification using accumulated big data. However, as climate change has recently become a more serious global problem, natural disasters are occurring frequently. When natural disasters are analyzed from a data analyst's perspective, they appear as outliers, and the ability of deep learning algorithms trained on computer-acquired big data to predict such outliers (natural disasters) is limited. To predict natural disasters, deep learning algorithms must be enhanced to predict outliers based on information such as the correlation between input and output. Thus, algorithms specialized for a single field, and in particular for abnormal values, must be developed. Therefore, considering the correlation between input and output, we propose a positive and negative perceptron (PNP) algorithm to predict river flow from climate-change-sensitive precipitation. The PNP algorithm consists of a hidden deep-learning layer composed of positive and negative neurons. We built deep learning models using the PNP algorithm to predict the flow of three rivers, as well as comparative models using long short-term memory (LSTM) to validate the PNP algorithm's performance. We compared the predictive performance of each model using the root mean square error and the symmetric mean absolute percentage error, and the PNP models performed better than the LSTM models. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
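The two evaluation metrics the abstract above compares models with, RMSE and SMAPE, can be sketched directly. The flow values are made up for illustration; SMAPE is given here in percent using the mean-of-absolutes denominator:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, in percent."""
    return 100.0 / len(y_true) * sum(
        abs(t - p) / ((abs(t) + abs(p)) / 2.0)
        for t, p in zip(y_true, y_pred))

flow_true = [10.0, 20.0, 30.0]
flow_pred = [12.0, 18.0, 33.0]
print(rmse(flow_true, flow_pred), smape(flow_true, flow_pred))
```

RMSE penalizes large absolute errors (relevant for flood peaks), while SMAPE normalizes by magnitude, so the two together cover both high-flow and low-flow accuracy.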
13. Fake opinion detection in an e-commerce business based on a long-short memory algorithm.
- Author
-
Alsharif, Nizar
- Subjects
RECURRENT neural networks, ELECTRONIC commerce, CONSUMERS' reviews, WORD frequency, ALGORITHMS, DEEP learning
- Abstract
Online fake opinions, in the form of misleading reviews of products or services, are harmful and influence consumers' purchase decisions. Most consumers who buy over the Internet check online reviews first to learn about previous customers' experiences with a product or service. However, some e-businesses encourage individuals to write fake reviews of products or services, in exchange for money or free products, to compete with rivals. Such people are regarded as fraudulent reviewers, and the reviews they write are known as fake reviews. In this study, we consider the problem of fake opinion identification in e-commerce businesses using a deep learning recurrent neural network with long short-term memory (RNN-LSTM). We performed our experiment on the standard Yelp product-review dataset. We used a linguistic inquiry and word count dictionary to extract additional linguistic features from the review texts that help distinguish real from fake reviews, including the authenticity of the review text, the analytical thinking of the reviewer, negative words, positive words, and personal pronouns. The proposed RNN-LSTM model performs well in classifying reviews as fake or real, achieving 98% in terms of both accuracy and F1-score. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
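The linguistic-feature idea in the abstract above (counts of positive words, negative words, personal pronouns) can be sketched as below. The tiny word lists are illustrative stand-ins for a real LIWC-style dictionary, which is far larger and also covers categories like authenticity and analytical thinking:

```python
# Illustrative stand-in word lists (a real LIWC dictionary is much larger).
POSITIVE = {"great", "good", "excellent", "love"}
NEGATIVE = {"bad", "terrible", "awful", "hate"}
PRONOUNS = {"i", "me", "my", "we", "our", "you"}

def linguistic_features(review):
    """Count dictionary-category hits in a review's tokens."""
    tokens = review.lower().split()
    return {
        "positive": sum(t in POSITIVE for t in tokens),
        "negative": sum(t in NEGATIVE for t in tokens),
        "pronouns": sum(t in PRONOUNS for t in tokens),
    }

f = linguistic_features("I love this product and my friends love it")
print(f)  # {'positive': 2, 'negative': 0, 'pronouns': 2}
```

Features like these are concatenated with the text representation before being fed to the RNN-LSTM classifier.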
14. Decode after filtering: a network for camouflage object segmentation.
- Author
-
Zhang, Congwei, Li, Xiaodong, and Li, Xinde
- Subjects
MODULAR coordination (Architecture), DEEP learning, MULTIPLICATION, ALGORITHMS, FEATURE extraction
- Abstract
As a derivative task of object segmentation, camouflage object segmentation must contend with redundant, complex information and objects that resist detection. Most object segmentation algorithms are dedicated to improving the structure of the feature extraction and fusion modules, but their handling of complex redundant information is insufficient, so they cannot segment camouflaged objects well. Targeting the data characteristics of camouflaged objects, we propose a novel fully convolutional network structure called DAFNet, whose core is a feature filter module (FFM). The FFM is formed by multi-path dilated convolutions combined through multiplication, which "filters" the redundant features flowing through the network to improve its segmentation performance on camouflaged objects. We also design a Gaussian-convolution-based attention module, the Gaussian attention module (GAM), which refines the rough predicted map to further improve output quality. Experiments on existing camouflage object datasets show that DAFNet achieves state-of-the-art performance on camouflage object segmentation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
15. Zone scheduling optimization of pumps in water distribution networks with deep reinforcement learning and knowledge-assisted learning.
- Author
-
Xu, Jiahui, Wang, Hongyuan, Rao, Jun, and Wang, Jingcheng
- Subjects
DEEP learning, WATER distribution, WATER pumps, REWARD (Psychology), REINFORCEMENT learning, ALGORITHMS
- Abstract
This article studies the pump scheduling optimization problem in water distribution networks (WDNs) through a novel algorithm that combines knowledge learning and deep reinforcement learning. The optimization problem is modeled as a Markov decision process that takes three pressure-management objectives into consideration. Knowledge-assisted learning is incorporated into the reinforcement learning framework (KA-RL) to help evaluate the state value and guide the design of the reward function, since the KA-RL framework leverages the notion that historical WDN data can be used to produce optimal trajectories under parametric variations. The knowledge-assisted proximal policy optimization (KA-PPO) algorithm, which uses only nodal pressure data, is proposed on top of the KA-RL framework to handle arbitrary WDN topologies and time-varying water demand. The effectiveness and applicability of the proposed algorithm are illustrated on a 22-node network with two pumps in a pump station. Empirical results demonstrate that KA-PPO works well in practice and compares favorably with the Nelder–Mead method. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
16. Deep learning-based soft computing model for image classification application.
- Author
-
Revathi, M., Jeya, I. Jasmine Selvakumari, and Deepa, S. N.
- Subjects
SOFT computing, DEEP learning, SWARM intelligence, COMPUTER-assisted image analysis (Medicine), ARTIFICIAL intelligence, ALGORITHMS
- Abstract
The use of swarm intelligence approaches and machine learning models in medical image processing has grown rapidly, and their applicability to various types of cancer classification has grown as well in recent years. Given this growth, this work develops an optimized deep learning neural network classifier for classifying nodule tissues in lung cancer images, an important biomedical application. The optimized model is a hybrid of an adaptive multi-swarm particle swarm optimizer and a new improved firefly algorithm, yielding better exploration and exploitation mechanisms for finding near-optimal solutions. The multi-swarm particle swarm optimizer (MSPSO) possesses strong exploration capability due to its regrouping schedule, and the improved firefly algorithm (ImFFA) possesses a better exploitation mechanism due to its inherent attractiveness and intensity features. The new adaptive MSPSO–ImFFA is applied to the deep learning neural classifier to overcome local and global minima and premature convergence by tuning the classifier's weight values. The resulting adaptive MSPSO–ImFFA-based deep learning neural network classifier is employed to classify lung cancer tissues in the considered lung computed tomography images. The results demonstrate the effectiveness of the deep learning classifier on the considered lung image datasets in comparison with other methods from previous literature. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
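The canonical particle swarm update that MSPSO in the abstract above builds on combines inertia, a pull toward the particle's personal best, and a pull toward the global best. A minimal sketch; `w`, `c1`, `c2` and the fixed `r1`, `r2` are illustrative (real PSO draws `r1`, `r2` uniformly at random each step):

```python
def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, r1=0.5, r2=0.5):
    """One velocity/position update for a single particle.

    x, v: current position and velocity; pbest: personal best
    position; gbest: swarm-wide best position.
    """
    new_v = [w * vi
             + c1 * r1 * (pb - xi)   # cognitive pull toward pbest
             + c2 * r2 * (gb - xi)   # social pull toward gbest
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v

x, v = [0.0, 0.0], [0.1, 0.1]
pbest, gbest = [1.0, 1.0], [2.0, 2.0]
x2, v2 = pso_step(x, v, pbest, gbest)
print(x2)  # both coordinates move toward the best positions
```

MSPSO's contribution is periodically regrouping particles across sub-swarms on top of this update; ImFFA then supplies a finer exploitation step.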
17. A deep learning approach for effective intrusion detection in wireless networks using CNN.
- Author
-
Riyaz, B. and Ganapathy, Sannasi
- Subjects
CONVOLUTIONAL neural networks, DEEP learning, DATA mining, FEATURE selection, ALGORITHMS, RANDOM fields, DATA transmission systems
- Abstract
Security plays a major role in today's Internet owing to the rapid growth in the number of Internet users. Many researchers have developed intrusion detection systems that identify and detect intruders using data mining techniques. However, the existing systems cannot achieve sufficient detection accuracy with data mining alone. We therefore propose a new intrusion detection system that secures data communication by identifying and detecting intruders effectively in wireless networks. We propose a new feature selection algorithm, a conditional random field and linear correlation coefficient-based feature selection algorithm, to select the most informative features, which are then classified using an existing convolutional neural network. Experiments evaluating the proposed intrusion detection system show an overall detection accuracy of 98.88%. Tenfold cross-validation was used to evaluate the performance of the proposed model. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
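The linear-correlation half of the feature selection described in the abstract above can be sketched as ranking features by the absolute Pearson correlation with the class label and keeping the top k. The feature names and toy data are illustrative, not from the paper:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def select_features(columns, labels, k):
    """Keep the k feature names most correlated (in magnitude) with labels."""
    ranked = sorted(columns,
                    key=lambda name: -abs(pearson(columns[name], labels)))
    return ranked[:k]

labels = [0, 0, 1, 1]
columns = {
    "pkt_rate": [1.0, 2.0, 9.0, 10.0],  # tracks the label closely
    "noise":    [5.0, 1.0, 4.0, 2.0],   # little relation to the label
}
print(select_features(columns, labels, 1))  # ['pkt_rate']
```

In the paper's pipeline, a conditional random field component complements this ranking before the CNN classifies the selected features.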
18. Enhanced pedestrian detection using optimized deep convolution neural network for smart building surveillance.
- Author
-
Kim, Bubryur, Yuvaraj, N., Sri Preethaa, K. R., Santhosh, R., and Sabari, A.
- Subjects
CONVOLUTIONAL neural networks, INTELLIGENT buildings, DEEP learning, MACHINE learning, PEDESTRIANS, ALGORITHMS
- Abstract
Pedestrian detection and tracking is a critical task in smart-building surveillance. Owing to advances in sensors, architects are concentrating on constructing smart buildings. Pedestrian detection in smart buildings is greatly challenged by image noise from various external environmental factors. Traditional filter-based techniques for image classification, such as histogram-of-oriented-gradients filters, and machine learning algorithms struggle to perform well on large volumes of pedestrian images. Advances in deep learning algorithms make them exceptionally good at handling large volumes of image data. This study proposes a pedestrian detection model based on a deep convolutional neural network (CNN) for classifying pedestrians in input images. The proposed optimized version of the VGG-16 architecture is evaluated for pedestrian detection on the INRIA benchmark dataset, consisting of 227 × 227 pixel images, and achieves an accuracy of 98.5%. The proposed model performs better than other pretrained CNN architectures and other machine learning models; pedestrians are reliably detected, and the performance of the proposed algorithm is validated. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
19. Soft computing approaches for character credential and word prophecy analysis with stone encryptions.
- Author
-
Vani, V. and Ananthalakshmi, S. R.
- Subjects
SOFT computing, CONVOLUTIONAL neural networks, PATTERN recognition systems, ALGORITHMS, MACHINE learning, DEEP learning, FEATURE extraction
- Abstract
A uniform corpus of untranslated script is a preliminary stage for computational epigraphy, and mechanizing this process through deep learning algorithms would be an essential support to epigraphical research. Our proposed system, based on soft computing techniques, focuses on recognizing eleventh-century ancient Tamil characters and converting them into current-century word form. The system first performs preprocessing steps followed by image segmentation. The decomposed image undergoes a hybrid feature extraction technique, with a chi-square test to check whether every pixel of the Zernike image is bounded inside the unit circle, while an ANOVA method tests for significant differences between the HOG feature and the zoning feature. These features are then used for image classification, and character recognition proceeds with convolutional neural networks. Finally, the identified characters are assembled into word form with the help of a Boggle algorithm. The hybrid feature extraction with convolutional neural networks achieves a recognition rate of 92.78%. Our experiment shows the large potential of deep learning algorithms in automatic epigraphy. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library