3,078 results
Search Results
2. LiSHT: Non-parametric Linearly Scaled Hyperbolic Tangent Activation Function for Neural Networks
- Author
-
Roy, Swalpa Kumar, Manna, Suvojit, Dubey, Shiv Ram, Chaudhuri, Bidyut Baran, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Gupta, Deep, editor, Bhurchandi, Kishor, editor, Murala, Subrahmanyam, editor, Raman, Balasubramanian, editor, and Kumar, Sanjeev, editor
- Published
- 2023
- Full Text
- View/download PDF
3. Development of a Novel Method for Image Resizing Using Artificial Neural Network
- Author
-
Arabboev, Mukhriddin, Begmatov, Shohruh, Nosirov, Khabibullo, Tashmetov, Shakhzod, Saydiakbarov, Saydiakhrol, Chedjou, Jean Chamberlain, Kyamakya, Kyandoghere, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Zaynidinov, Hakimjon, editor, Singh, Madhusudan, editor, Tiwary, Uma Shanker, editor, and Singh, Dhananjay, editor
- Published
- 2023
- Full Text
- View/download PDF
4. Classification of Dermoscopy Textures with an Ensemble Feedback of Multilayer Perceptron
- Author
-
Prabhu Chakkaravarthy, A., Saravanan, T. R., Udayakumar, Sridhar, Subasini, C. A., Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Kottursamy, Kottilingam, editor, Bashir, Ali Kashif, editor, Kose, Utku, editor, and Uthra, Annie, editor
- Published
- 2023
- Full Text
- View/download PDF
5. q-Softplus Function: Extensions of Activation Function and Loss Function by Using q-Space
- Author
-
Abe, Motoshi, Kurita, Takio, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Wallraven, Christian, editor, Liu, Qingshan, editor, and Nagahara, Hajime, editor
- Published
- 2022
- Full Text
- View/download PDF
6. Hyper-parameters Tuning of Artificial Neural Networks: An Application in the Field of Recommender Systems
- Author
-
Stergiopoulos, Vaios, Vassilakopoulos, Michael, Tousidou, Eleni, Corral, Antonio, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Chiusano, Silvia, editor, Cerquitelli, Tania, editor, Wrembel, Robert, editor, Nørvåg, Kjetil, editor, Catania, Barbara, editor, Vargas-Solar, Genoveva, editor, and Zumpano, Ester, editor
- Published
- 2022
- Full Text
- View/download PDF
7. Feasibility Study of Deep Frequency Modulation Synthesis
- Author
-
Hirata, Keiji, Hamanaka, Masatoshi, Tojo, Satoshi, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Kronland-Martinet, Richard, editor, Ystad, Sølvi, editor, and Aramaki, Mitsuko, editor
- Published
- 2021
- Full Text
- View/download PDF
8. Image Recognition Method of Defective Button Battery Based on Improved MobileNetV1
- Author
-
Yao, Tao, Zhang, Qi, Wu, Xingyu, Lin, Xiuyue, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Wang, Yongtian, editor, Li, Xueming, editor, and Peng, Yuxin, editor
- Published
- 2020
- Full Text
- View/download PDF
9. Ensemble Kalman Filter Optimizing Deep Neural Networks: An Alternative Approach to Non-performing Gradient Descent
- Author
-
Yegenoglu, Alper, Krajsek, Kai, Pier, Sandra Diaz, Herty, Michael, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Nicosia, Giuseppe, editor, Ojha, Varun, editor, La Malfa, Emanuele, editor, Jansen, Giorgio, editor, Sciacca, Vincenzo, editor, Pardalos, Panos, editor, Giuffrida, Giovanni, editor, and Umeton, Renato, editor
- Published
- 2020
- Full Text
- View/download PDF
10. Sentiment Analysis and Prediction Using Neural Networks
- Author
-
Paliwal, Sneh, Khatri, Sunil Kumar, Sharma, Mayank, Barbosa, Simone Diniz Junqueira, Series Editor, Filipe, Joaquim, Series Editor, Kotenko, Igor, Series Editor, Sivalingam, Krishna M., Series Editor, Washio, Takashi, Series Editor, Yuan, Junsong, Series Editor, Zhou, Lizhu, Series Editor, Ghosh, Ashish, Series Editor, Luhach, Ashish Kumar, editor, Singh, Dharm, editor, Hsiung, Pao-Ann, editor, Hawari, Kamarul Bin Ghazali, editor, Lingras, Pawan, editor, and Singh, Pradeep Kumar, editor
- Published
- 2019
- Full Text
- View/download PDF
11. Efficient Low-Precision CORDIC Algorithm for Hardware Implementation of Artificial Neural Network
- Author
-
Raut, Gopal, Bhartiy, Vishal, Rajput, Gunjan, Khan, Sajid, Beohar, Ankur, Vishvakarma, Santosh Kumar, Barbosa, Simone Diniz Junqueira, Editorial Board Member, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Kotenko, Igor, Editorial Board Member, Yuan, Junsong, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Sengupta, Anirban, editor, Dasgupta, Sudeb, editor, Singh, Virendra, editor, Sharma, Rohit, editor, and Kumar Vishvakarma, Santosh, editor
- Published
- 2019
- Full Text
- View/download PDF
12. Diabetes Detection Using Deep Neural Network
- Author
-
Mohapatra, Saumendra Kumar, Nanda, Susmita, Mohanty, Mihir Narayan, Barbosa, Simone Diniz Junqueira, Series Editor, Filipe, Joaquim, Series Editor, Kotenko, Igor, Series Editor, Sivalingam, Krishna M., Series Editor, Washio, Takashi, Series Editor, Yuan, Junsong, Series Editor, Zhou, Lizhu, Series Editor, Zelinka, Ivan, editor, Senkerik, Roman, editor, Panda, Ganapati, editor, and Lekshmi Kanthan, Padma Suresh, editor
- Published
- 2018
- Full Text
- View/download PDF
13. A Multilayer Feedforward Fuzzy Neural Network
- Author
-
Savran, Aydoğan, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Carbonell, Jaime G., editor, Siekmann, Jörg, editor, and Savacı, F. Acar, editor
- Published
- 2006
- Full Text
- View/download PDF
14. Generalized Splitting 2D Flexible Activation Function
- Author
-
Vitagliano, Francesca, Parisi, Raffaele, Uncini, Aurelio, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Apolloni, Bruno, editor, Marinaro, Maria, editor, and Tagliaferri, Roberto, editor
- Published
- 2003
- Full Text
- View/download PDF
15. Structuring Interactive Systems Specifications for Executability and Prototypability
- Author
-
Navarre, David, Palanque, Philippe, Bastide, Rémi, Sy, Ousmane, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Palanque, Philippe, editor, and Paternò, Fabio, editor
- Published
- 2001
- Full Text
- View/download PDF
16. An empirical assessment of customer satisfaction of internet banking service quality – Hybrid model approach
- Author
-
Kashyap, Sachin, Gupta, Sanjeev, and Chugh, Tarun
- Published
- 2024
- Full Text
- View/download PDF
17. DP-ANN: A new Differential Private Artificial Neural Network with Application on Health data (Workshop Paper)
- Author
-
Indrajeet Kumar Sinha, Krishna Pratap Singh, and Shekhar Verma
- Subjects
Artificial neural network, Computer science, Computer Science::Neural and Evolutionary Computation, Activation function, Base (topology), Error function, Differential privacy, Noise (video), Artificial intelligence, Differential (infinitesimal), Laplace operator, Computer Science::Cryptography and Security - Abstract
Privacy of individual data, especially health data, is sensitive and important. Privacy-preserving machine learning is emerging as one of the solutions for securing data while retaining its utility for creating knowledge. In this paper, we propose a differentially private artificial neural network (DP-ANN) and show its application to predicting the spread and the peak number of COVID-19 cases. In the DP-ANN, Laplacian noise is introduced at the activation-function level, and the approach is compared with existing privacy schemes applied at the error-function and weight levels of an ANN. Results show that the DP-ANN model with the private activation function produces results similar to those of the base ANN model.
- Published
- 2020
- Full Text
- View/download PDF
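The DP-ANN abstract above describes injecting Laplacian noise at the activation-function level. A minimal NumPy sketch of that idea (the layer sizes, noise scale, and function names are my assumptions, not the authors' implementation):

```python
import numpy as np

def private_sigmoid(z, epsilon=1.0, sensitivity=1.0, rng=None):
    """Sigmoid activation with Laplace noise added to its output.

    Noise scale b = sensitivity / epsilon follows the standard Laplace
    mechanism; a smaller epsilon means stronger privacy and more noise.
    """
    rng = rng or np.random.default_rng()
    a = 1.0 / (1.0 + np.exp(-z))  # ordinary sigmoid
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=a.shape)
    return a + noise

# One hidden-layer forward pass with private activations (toy sizes).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))   # 4 samples, 3 features
W = rng.normal(size=(3, 5))   # 3 inputs -> 5 hidden units
h = private_sigmoid(x @ W, epsilon=2.0, rng=rng)
print(h.shape)  # (4, 5)
```

With a very large epsilon the noise scale shrinks toward zero and the layer behaves like an ordinary sigmoid, which is the base-ANN comparison the abstract mentions.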
18. Prediction of Fabric Drape Based on BP Neural Network Paper
- Author
-
Shuhui Xia and Xiujuan Fan
- Subjects
Quantitative Biology::Neurons and Cognition, Artificial neural network, Computer science, Activation function, Initial value problem, Algorithm - Abstract
This paper studies the design of a fabric drape prediction model based on a BP neural network, which predicts the static and dynamic drape coefficients from the number of warp and weft yarns, the density of warp and weft yarns, the weight per square meter, and the thickness of the fabric. The effect of the number of intermediate-layer neurons on the accuracy of the network was studied, as were the influence of the activation function on prediction and of the initial values on local optimization. The simulation results show that the BP neural network predicts fabric drape well and meets the expected requirements.
- Published
- 2020
19. A Paper Currency Recognition System based on Neural Networks with Gaussian Functions and an Optimizing Method for Its Parameters on Way to Learning
- Author
-
Baiqing Sun and Fumiaki Takeda
- Subjects
Artificial neural network, Computer science, Gaussian, Activation function, Function (mathematics), Sigmoid function, Maxima and minima, Gaussian function, Data mining, Gradient descent, Algorithm - Abstract
In this paper, to improve the rejection capability of a paper currency recognition system for unknown currency patterns while preserving its recognition capability for known patterns, a feed-forward neural network (FNN) with a Gaussian activation function is proposed. The proposed activation function is a ridge-like function, and it replaces the sigmoid function in all units of the hidden and output layers. A hybrid learning algorithm for optimizing the width parameters of the Gaussian function is also proposed. The algorithm consists of two steps: exploring local minima with gradient descent search, and extricating the search from local minima with a random search based on the downhill simplex method. Simulation results reveal the potential effectiveness of the proposed activation function and algorithm: the system recognizes known currency patterns and rejects unknown patterns effectively.
- Published
- 2006
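The entry above replaces the sigmoid with a Gaussian activation in all hidden and output units. A short sketch contrasting the two shapes (the width value and function names are my assumptions; the width is the kind of parameter the paper's hybrid algorithm optimizes):

```python
import numpy as np

def sigmoid(z):
    # monotone, saturates at 0 and 1
    return 1.0 / (1.0 + np.exp(-z))

def gaussian_activation(z, width=1.0):
    """Ridge-like Gaussian unit: peaks at z = 0 and decays on both
    sides, unlike the monotone sigmoid. This localized response is
    what helps reject inputs far from any known pattern."""
    return np.exp(-(z / width) ** 2)

z = np.linspace(-3, 3, 7)
print(np.round(gaussian_activation(z), 3))
```

Far from the ridge the Gaussian unit's output falls toward zero while a sigmoid saturates at one, which is the mechanism behind the improved rejection of unknown currency patterns.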
20. Dynamic activation and enhanced image contour features for object detection.
- Author
-
Wu, Jun, Zhu, Jiahui, Tong, Xin, Zhu, Tianliang, Li, Tianyi, and Wang, Chunzhi
- Abstract
Object detection is a popular research direction widely used in areas such as autonomous driving and medical diagnosis. Mobile devices often have limited storage for deploying large object detection networks and must still meet real-time requirements. This paper proposes a lightweight and efficient object detection model based on YOLOv4. First, the lightweight network GhostNet is used to extract image features, reducing the parameter count and computation of the backbone; then the AFmodule and the Meta-ACON activation function are combined to enhance the backbone's feature extraction, strengthening the model's ability to capture spatial feature information. The paper also designs the RL-PAFPN feature fusion structure with the Reslayer module to further improve the model's ability to extract and fuse image features. Compared with other mainstream object detection models, the proposed YOLOv4-Ghost-AMR network requires less computation and fewer parameters while reaching an accuracy of 86.83%, making it suitable for deployment on storage-constrained mobile devices. The model can be applied in medical, traffic, and fault detection fields, replacing traditional manual inspection, saving labor and time, and achieving high-precision real-time object detection. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
21. Application of neural networks in spatial signal processing (invited paper)
- Author
-
Bratislav Milovanovic, Nebojsa Doncov, Zoran Stankovic, Marija Agatonovic, and Maja Sarevska
- Subjects
Signal processing, Artificial neural network, Computer science, Computer Science::Neural and Evolutionary Computation, Activation function, Direction of arrival, Perceptron, Machine learning, Antenna array, Radial basis function, Artificial intelligence, Antenna (radio), Algorithm - Abstract
Neural networks (NNs) have proven to be a very powerful tool for both one-dimensional (1D) and two-dimensional (2D) direction-of-arrival (DOA) estimation. By avoiding complex and time-consuming mathematical calculations, NNs estimate DOAs almost instantaneously, which makes them very convenient for real-time applications. Further, unlike the well-known MUSIC algorithm, neural-network-based models provide accurate directions without an additional antenna-array calibration procedure or a priori knowledge of the number of sources. This review paper presents the results achieved by the research group at the Faculty of Electronic Engineering in Nis. The problem of DOA estimation of narrowband signals impinging upon different antenna-array configurations is addressed. Both Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) neural networks are considered, and their advantages and disadvantages are discussed. To improve the resolution of DOA estimates, a sectorization model is introduced. As shown in this work, neural-network-based models demonstrate high-resolution localization capabilities and much better efficiency than MUSIC.
- Published
- 2012
22. Using artificial neural network for predicting and controlling the effluent chemical oxygen demand in wastewater treatment plant
- Author
-
Bekkari, Naceureddine and Zeddouri, Aziez
- Published
- 2019
- Full Text
- View/download PDF
23. Elastic Adaptively Parametric Compounded Units for Convolutional Neural Network.
- Author
-
Zhang, Changfan, Xu, Yifu, and Sheng, Zhenwen
- Subjects
CONVOLUTIONAL neural networks, COMPUTER vision, IMAGE recognition (Computer vision) - Abstract
The activation function introduces nonlinearity into convolutional neural networks, which has greatly promoted the development of computer vision tasks. This paper proposes elastic adaptively parametric compounded units to improve the performance of convolutional neural networks for image recognition. The activation function takes the structural advantages of two mainstream functions as its fundamental architecture. The SENet model is embedded in the proposed activation function to adaptively recalibrate the feature-mapping weight in each channel, thereby enhancing the fitting capability of the activation function. In addition, the function has an elastic slope in the positive input region, obtained by simulating random noise, to improve the generalization capability of neural networks. To prevent the generated noise from producing overly large variations during training, a special protection mechanism is adopted. To verify the effectiveness of the activation function, comparative experiments were conducted on the CIFAR-10 and CIFAR-100 image datasets under exactly the same model. Experimental results show that the proposed activation function outperforms other functions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. Elementary proof of Funahashi's theorem.
- Author
-
Mitsuo Izuki, Takahiro Noi, Yoshihiro Sawano, and Hirokazu Tanaka
- Subjects
FEEDFORWARD neural networks, CONTINUOUS functions, EUCLIDEAN geometry, HARMONIC analysis (Mathematics), FOURIER analysis - Abstract
Funahashi established that the space of two-layer feedforward neural networks is dense in the space of all continuous functions defined over compact sets in n-dimensional Euclidean space. The purpose of this short survey is to reexamine the proof of Theorem 1 in Funahashi [3]. The Tietze extension theorem, whose proof is contained in the appendix, will be used. The paper draws on harmonic analysis, real analysis, and Fourier analysis, but its intended audience is researchers who do not specialize in these fields of mathematics. Some fundamental facts used without proof are collected after the notation is introduced. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
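The density statement reexamined in the survey above can be written out as follows (a standard formulation of Funahashi's theorem; the notation here is mine, not the survey's):

```latex
% Funahashi (1989): universal approximation by two-layer networks.
% Let \sigma be a bounded, monotone increasing, non-constant
% continuous function, K \subset \mathbb{R}^n compact, f \in C(K).
% Then for every \varepsilon > 0 there exist N \in \mathbb{N},
% c_i, \theta_i \in \mathbb{R}, and w_i \in \mathbb{R}^n such that
\[
  \sup_{x \in K} \Bigl|\, f(x) - \sum_{i=1}^{N} c_i \,
      \sigma\!\bigl( \langle w_i , x \rangle + \theta_i \bigr) \Bigr|
  < \varepsilon .
\]
```

The sum is exactly a two-layer feedforward network with N hidden units, so the inequality says such networks are dense in C(K) under the uniform norm.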
25. An analysis of weight initialization methods in connection with different activation functions for feedforward neural networks.
- Author
-
Wong, Kit, Dornberger, Rolf, and Hanne, Thomas
- Abstract
The selection of weight initialization in an artificial neural network is a key aspect that affects the learning speed, convergence rate, and classification accuracy of the network. In this paper, we investigate the effects of weight initialization in an artificial neural network. The Nguyen-Widrow, random, and Xavier initialization methods are each paired with five different activation functions. The paper deals with a feedforward neural network consisting of an input layer, a hidden layer, and an output layer. The paired combinations of weight initialization methods and activation functions are examined, tested, and compared based on the best loss rate achieved in training. This work aims to better understand how weight initialization methods, in combination with activation functions, affect the learning speed after a fixed number of training epochs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
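Of the three initialization schemes compared above, Xavier (Glorot) initialization is the easiest to sketch. A minimal illustration (the layer sizes and the choice of the uniform variant are my assumptions, not the paper's setup):

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, rng=None):
    """Glorot/Xavier uniform initialization: samples weights from
    U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)),
    which keeps activation variance roughly constant across layers."""
    rng = rng or np.random.default_rng()
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

# Hidden-layer weight matrix for a 784 -> 128 layer (toy sizes).
W = xavier_uniform(784, 128, rng=np.random.default_rng(0))
print(W.shape)  # (784, 128)
```

Plain random initialization draws from a fixed-scale distribution regardless of layer width; Xavier's fan-dependent limit is what the paper's comparison across activation functions probes.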
26. A classroom facial expression recognition method based on attention mechanism.
- Author
-
Jin, Huilong, Du, Ruiyan, Wen, Tian, Zhao, Jia, Shi, Lei, and Zhang, Shuang
- Subjects
FACIAL expression, ARTIFICIAL neural networks, FEATURE extraction, ATTENTION, CLASSROOMS - Abstract
Compared with general facial expression recognition, classroom facial expression recognition should pay more attention to feature extraction from specific regions that reflect students' attention. However, most features are extracted from complete facial images by deep neural networks. In this paper, we propose a new expression recognition method based on an attention mechanism, in which more weight is given to the channel information most relevant to expression classification instead of depending on all channels equally. A new classroom expression classification that takes concentration into account is also introduced. Moreover, the activation function is modified to reduce the number of parameters and computations, and dropout regularization is added after the pooling layer to prevent overfitting. The experiments show that the accuracy of our method, named Ixception, improves on other algorithms by up to 5.25%. It meets the requirements for analyzing classroom concentration well. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
27. Enhancement of Neural Network Performance with the Use of Two Novel Activation Functions: modExp and modExpm
- Author
-
Kalim, Heena, Chug, Anuradha, and Singh, Amit Prakash
- Published
- 2024
- Full Text
- View/download PDF
28. modSwish: a new activation function for neural network
- Author
-
Kalim, Heena, Chug, Anuradha, and Singh, Amit Prakash
- Published
- 2024
- Full Text
- View/download PDF
29. Seatbelt Detection Algorithm Improved with Lightweight Approach and Attention Mechanism.
- Author
-
Qiu, Liankui, Rao, Jiankun, and Zhao, Xiangzhe
- Subjects
SEAT belts, ALGORITHMS, PETRI nets - Abstract
Precise and rapid detection of seatbelts is an essential research field for intelligent traffic management. To improve the detection precision of seatbelts and speed up inference, a lightweight seatbelt detection algorithm is proposed. First, by adding the G-ELAN module designed in this paper to the YOLOv7-tiny network, the structure is optimized and the parameter count reduced, and the ResNet is compressed with a channel-pruning approach to decrease computational overhead. Then, the Mish activation function replaces the Leaky ReLU in the neck to enhance the non-linear capability of the network. Finally, a triplet attention module is integrated into the pruned model to compensate for the performance reduction caused by pruning and raise overall detection precision. Experimental results on a self-built seatbelt dataset showed that, compared with the initial network, the Mean Average Precision (mAP) of the proposed GM-YOLOv7 improved by 3.8%, while the model size and computation were lowered by 20% and 24.6%, respectively. Compared with YOLOv3, YOLOX, and YOLOv5, the mAP of GM-YOLOv7 increased by 22.4%, 4.6%, and 4.2%, respectively, and the number of computational operations decreased by 25%, 63%, and 38%, respectively. In addition, the accuracy of the improved RST-Net increased to 98.25%, while the parameter count was reduced by 48% compared with the base model, effectively improving detection performance with a lightweight structure. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. ASPP+-LANet: A Multi-Scale Context Extraction Network for Semantic Segmentation of High-Resolution Remote Sensing Images.
- Author
-
Hu, Lei, Zhou, Xun, Ruan, Jiachen, and Li, Supeng
- Subjects
FEATURE extraction, IMAGE processing, URBAN planning, SPATIAL resolution, IMAGE segmentation - Abstract
Semantic segmentation of remote sensing (RS) images is a pivotal branch of RS image processing and plays a significant role in urban planning, building extraction, vegetation extraction, etc. With the continuous advancement of remote sensing technology, the spatial resolution of RS images is progressively improving. This escalation in resolution gives rise to challenges such as imbalanced class distributions among ground objects, significant variations in ground-object scales, and the presence of redundant information and noise interference. In this paper, we propose a multi-scale context extraction network, ASPP+-LANet, based on the LANet, for semantic segmentation of high-resolution RS images. First, we design an ASPP+ module, expanding upon the ASPP module by incorporating an additional feature extraction channel, redesigning the dilation rates, and introducing the Coordinate Attention (CA) mechanism, so that it can effectively improve the segmentation of ground-object targets at different scales. Second, we introduce the Funnel ReLU (FReLU) activation function to enhance the segmentation of slender ground-object targets and refine segmentation edges. The experimental results show that our network model demonstrates superior segmentation performance on both the Potsdam and Vaihingen datasets, outperforming other state-of-the-art (SOTA) methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Design of Gudermannian Neuroswarming to solve the singular Emden–Fowler nonlinear model numerically
- Author
-
Zulqurnain Sabir, Muhammad Shoaib, Korhan Cengiz, Dumitru Baleanu, and Muhammad Asif Zahoor Raja
- Subjects
Original Paper, Gudermannian function, Fitness function, Artificial neural network, Heuristic (computer science), Computer science, Particle swarm optimization, Applied Mathematics, Mechanical Engineering, Activation function, Aerospace Engineering, Emden–Fowler, Ocean Engineering, Solver, Nonlinear system, Statistical analysis, Control and Systems Engineering, Robustness (computer science), Applied mathematics, Active-set scheme, Electrical and Electronic Engineering - Abstract
The current investigation concerns the design of a novel integrated neuroswarming heuristic paradigm using Gudermannian artificial neural networks (GANNs) optimized with particle swarm optimization (PSO) aided by the active-set (AS) algorithm, i.e., GANN-PSOAS, for solving the nonlinear third-order Emden–Fowler model (NTO-EFM) involving single as well as multiple singularities. The Gudermannian activation function is exploited to construct the GANN-based differential mapping for NTO-EFMs, and these networks are combined to formulate the fitness function of the system. The objective function is optimized using the hybrid PSO-AS heuristic, i.e., PSOAS, to find the GANN weights. The correctness, effectiveness, and robustness of the designed GANN-PSOAS are verified through comparison with the exact solutions on three NTO-EFM problems. Statistical assessments demonstrate the accuracy, consistency, and stability of the proposed GANN-PSOAS solver on different measures.
- Published
- 2021
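The GANN solver above uses the Gudermannian function as its activation. Its standard closed form is gd(x) = 2 arctan(tanh(x/2)); a short sketch (using it as a network activation here is an illustration, not the authors' code):

```python
import numpy as np

def gudermannian(x):
    """Gudermannian function gd(x) = 2 * arctan(tanh(x / 2)).
    Odd, smooth, and bounded in (-pi/2, pi/2): tanh-like in shape,
    which makes it usable as a neural network activation."""
    return 2.0 * np.arctan(np.tanh(np.asarray(x, dtype=float) / 2.0))

x = np.linspace(-5.0, 5.0, 5)
print(np.round(gudermannian(x), 4))
```

Like tanh it saturates for large |x|, but its asymptotes are at plus/minus pi/2 rather than plus/minus 1.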
32. Multivariate Perturbed Hyperbolic Tangent-Activated Singular Integral Approximation.
- Author
-
Anastassiou, George A.
- Subjects
SMOOTHNESS of functions, QUANTITATIVE research, DENSITY - Abstract
Here we study the quantitative multivariate approximation of perturbed hyperbolic tangent-activated singular integral operators to the unit operator. The engaged neural network activation function is both parametrized and deformed, and the related kernel is a density function on R^N. We exhibit uniform and L^p (p ≥ 1) approximations via Jackson-type inequalities involving the first L^p modulus of smoothness, 1 ≤ p ≤ ∞. The differentiability of our multivariate functions is covered extensively in our approximations. We continue by detailing the global smoothness preservation results of our operators. We conclude the paper with the simultaneous approximation and the simultaneous global smoothness preservation by our multivariate perturbed activated singular integrals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Enhancement of Neural Network Performance with the Use of Two Novel Activation Functions: modExp and modExpm.
- Author
-
Kalim, Heena, Chug, Anuradha, and Singh, Amit Prakash
- Abstract
The paper introduces two novel activation functions known as modExp and modExpm. The activation functions possess several desirable properties, such as being continuously differentiable, bounded, smooth, and non-monotonic. Our studies have shown that modExp and modExpm consistently outperform ReLU and other activation functions across a range of challenging datasets and complex models. Initially, the experiments involve training and classifying with a multi-layer perceptron (MLP) on benchmark datasets such as the Diagnostic Wisconsin Breast Cancer and Iris Flower datasets. Both modExp and modExpm demonstrate impressive performance, with modExp achieving 94.15 and 95.56% and modExpm achieving 94.15 and 95.56%, respectively, when compared to ReLU, ELU, Tanh, Mish, Softsign, Leaky ReLU, and TanhExp. In addition, a series of experiments were carried out on five configurations of deeper neural networks, ranging from five to eight layers, using the MNIST dataset. The modExpm activation function demonstrated superior accuracy across network configurations, achieving 95.56, 95.43, 94.72, 95.14, and 95.61% on wider 5-layer, slimmer 5-layer, 6-layer, 7-layer, and 8-layer networks respectively. The modExp activation function also performed well, achieving the second-highest accuracies of 95.42, 94.33, 94.76, 95.06, and 95.37% on the same configurations, outperforming ReLU, ELU, Tanh, Mish, Softsign, Leaky ReLU, and TanhExp. The statistical measures show that both activation functions have the highest mean accuracy and the lowest standard deviation, root mean squared error, variance, and mean squared error. According to the experiments, both functions converge more quickly than ReLU, which is a significant advantage in neural network learning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Hybrid sigmoid activation function and transfer learning assisted breast cancer classification on histopathological images
- Author
-
Singh, Manoj Kumar and Chand, Satish
- Published
- 2024
- Full Text
- View/download PDF
35. On the universal approximation property of radial basis function neural networks
- Author
-
Ismayilova, Aysu and Ismayilov, Muhammad
- Published
- 2024
- Full Text
- View/download PDF
36. Application of Artificial Neural Network to Predict Biodiesel Yield from Waste Frying Oil Transesterification
- Author
-
Tri Wahyu Saputra, Agus Haryanto, Mareli Telaumbanua, and Amiera Citra Gita
- Subjects
Biodiesel, ANN model, Waste frying oil, Transesterification, Activation function, Yield, General Computer Science, Artificial neural network, General Chemical Engineering, General Engineering, Raw material, Geotechnical Engineering and Engineering Geology, Pulp and paper industry, Space and Planetary Science, Biodiesel production, Yield (chemistry), Methanol, Mathematics - Abstract
Used frying oil (UFO) has great potential as a feedstock for biodiesel production. This study aims to develop an artificial neural network (ANN) model to predict the biodiesel yield produced from base-catalyzed transesterification of UFO. The experiment was performed with 100 mL of UFO at three oil:methanol molar ratios (1:4, 1:5, and 1:6), at reaction temperatures from 30 to 55 °C (in 5 °C steps), and reaction times of 0.25, 0.5, 1, 2, 3, 6, 8, and 10 minutes. The prediction model was an ANN consisting of three layers with 27 combinations of three activation functions (tansig, logsig, purelin). All architectures were trained with the Levenberg–Marquardt algorithm using 126 data points (87.5%) and a learning rate of 0.001. Model validation used 18 data points (12.5%) measured at a reaction time of 8 min. Results showed that the two ANN models with activation functions logsig-purelin-logsig and purelin-logsig-tansig were the best, with RRMSE of 2.41% and 2.44% and R2 of 0.9355 and 0.9391, respectively. Predictions of biodiesel yield using the ANN models are significantly better than those of first-order kinetics.
- Published
- 2020
- Full Text
- View/download PDF
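The three activation functions named in the abstract above are MATLAB Neural Network Toolbox transfer functions: tansig (hyperbolic tangent sigmoid), logsig (logistic sigmoid), and purelin (linear). A minimal Python sketch of their standard definitions:

```python
import math

def tansig(x):
    # MATLAB's tansig: 2/(1 + exp(-2x)) - 1, numerically equivalent to tanh(x)
    return 2.0 / (1.0 + math.exp(-2.0 * x)) - 1.0

def logsig(x):
    # Logistic sigmoid: 1/(1 + exp(-x)), output in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def purelin(x):
    # Linear (identity) transfer function, typical for regression outputs
    return x
```

A three-layer combination such as logsig-purelin-logsig simply applies these in sequence to each layer's weighted sums.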
37. High-Performance Binocular Disparity Prediction Algorithm for Edge Computing.
- Author
-
Cheng, Yuxi, Song, Yang, Liu, Yi, Zhang, Hui, and Liu, Feng
- Subjects
EDGE computing ,DATA compression ,DISTRIBUTION costs ,COMPUTATIONAL complexity ,FORECASTING - Abstract
End-to-end disparity estimation algorithms based on cost volumes, when deployed on edge neural-network accelerators, face a structural-adaptation problem and must maintain accuracy under the constraint of the supported operator set. This paper therefore proposes a novel disparity calculation algorithm that uses low-rank approximation to replace 3D convolution and transposed 3D convolution, WReLU to reduce the data compression caused by the activation function, and unimodal cost-volume filtering with a confidence estimation network to regularize the cost volume. It alleviates the problem of the disparity-matching cost distribution deviating from the true distribution and greatly reduces the computational complexity and parameter count of the algorithm while improving accuracy. Experimental results show that, compared with a typical disparity estimation network, the absolute error of the proposed algorithm is reduced by 38.3%, the three-pixel error is reduced to 1.41%, and the number of parameters is reduced by 67.3%. The calculation accuracy is better than that of other algorithms, it is easier to deploy, and it has strong structural adaptability and better practicability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
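The low-rank idea in the abstract above (replacing a heavy convolution with cheaper factored ones) is easiest to see in 2D: a separable kernel factors via SVD into a pair of 1D filters. This is a generic illustration of the technique, not the paper's 3D construction:

```python
import numpy as np

# A separable 5x5 binomial smoothing kernel: outer product of two 1D filters
v = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
v = v / v.sum()
kernel2d = np.outer(v, v)          # rank-1 by construction

# SVD recovers the 1D factors: K = s0 * u0 @ w0^T for a rank-1 kernel
u_mat, s, wt = np.linalg.svd(kernel2d)
col = u_mat[:, 0] * np.sqrt(s[0])  # vertical 1D filter
row = wt[0, :] * np.sqrt(s[0])     # horizontal 1D filter
approx = np.outer(col, row)

# Convolving with col then row costs O(2k) per pixel instead of O(k^2)
err = np.abs(kernel2d - approx).max()
```

For kernels that are only approximately low-rank, truncating the SVD to the leading terms gives the best approximation at that rank, trading a small accuracy loss for a large cost reduction.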
38. A Neural-Network-Based Watermarking Method Approximating JPEG Quantization.
- Author
-
Yamauchi, Shingo and Kawamura, Masaki
- Subjects
DIGITAL watermarking ,JPEG (Image coding standard) ,WATERMARKS ,RECURRENT neural networks ,BIT error rate ,IMAGE compression ,TANGENT function - Abstract
We propose a neural-network-based watermarking method that introduces a quantized activation function approximating the quantization of JPEG compression. Many neural-network-based watermarking methods have been proposed; conventional methods acquire robustness against various attacks by introducing an attack-simulation layer between the embedding network and the extraction network, in which the quantization process of JPEG compression is replaced by a noise-addition process. In this paper, we propose a quantized activation function that can simulate the JPEG quantization standard as-is in order to improve robustness against JPEG compression. Our quantized activation function consists of several hyperbolic tangent functions and is applied as an activation function for neural networks. It was introduced into the attack layer of ReDMark, proposed by Ahmadi et al., for comparison with their method; that is, the embedding and extraction networks had the same structure. We compared ordinary JPEG-compressed images with images processed by the quantized activation function. The results showed that a network with quantized activation functions can approximate JPEG compression with high accuracy. We also compared the bit error rate (BER) of watermarks estimated by our network with those estimated by ReDMark and found that our network produced estimated watermarks with lower BERs. Therefore, our network outperformed the conventional method with respect to image quality and BER. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
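The sum-of-tanh quantizer described in the abstract above can be sketched generically as a smooth staircase: each tanh contributes one quantization step, and a sharpness factor controls how closely it tracks hard rounding. The step size, sharpness, and step count below are illustrative choices, not the authors' settings:

```python
import math

def tanh_quantizer(x, delta=1.0, s=50.0, K=8):
    """Smooth staircase approximating delta * round(x / delta) on [0, K*delta].

    Each tanh term contributes one step of height delta, centred between
    quantization levels; a large sharpness s makes the steps near-hard while
    keeping the function differentiable (usable inside a training graph).
    This is a generic sketch, not the paper's exact construction.
    """
    total = 0.0
    for k in range(1, K + 1):
        total += 0.5 * (1.0 + math.tanh(s * (x - (k - 0.5) * delta)))
    return delta * total
```

Because the staircase is differentiable everywhere, it can sit inside an attack-simulation layer and still pass gradients to the embedding network during training.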
39. ErfReLU: adaptive activation function for deep neural network.
- Author
-
Rajanand, Ashish and Singh, Pradeep
- Abstract
Recent research has found that the activation function (AF) plays a significant role in introducing the non-linearity that enhances the performance of deep learning networks. Researchers have recently started developing activation functions that can be trained throughout the learning process, known as trainable or adaptive activation functions (AAFs). Research on AAFs that enhance outcomes is still in its early stages. In this paper, a novel activation function, 'ErfReLU', has been developed based on the erf function and ReLU, leveraging the advantages of both the Rectified Linear Unit (ReLU) and the error function (erf). A comprehensive overview of activation functions such as Sigmoid, ReLU, and Tanh and their properties is briefly given. Adaptive activation functions such as Tanhsoft1, Tanhsoft2, Tanhsoft3, TanhLU, SAAF, ErfAct, Pserf, Smish, and Serf are also presented. Lastly, a comparative performance analysis of these 9 trainable activation functions against the proposed one has been performed. These activation functions are used in MobileNet, VGG16, and ResNet models, and their performance is evaluated on benchmark datasets such as CIFAR-10, MNIST, and FMNIST. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
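The abstract above does not give ErfReLU's closed form. One plausible way to combine the two named ingredients, shown purely as an illustrative sketch (the additive form and the parameter `alpha` are assumptions, not the paper's definition):

```python
import math

def erf_relu(x, alpha=0.5):
    # Illustrative blend of ReLU and the error function; in the spirit of
    # adaptive activation functions, `alpha` would be a trainable parameter.
    # NOTE: this additive form is an assumption, not the paper's formula.
    return max(x, 0.0) + alpha * math.erf(x)
```

The ReLU term preserves an unbounded linear regime for positive inputs, while the bounded, smooth erf term keeps a nonzero gradient for negative inputs, which is the kind of trade-off adaptive activation functions are designed to tune.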
40. Target detection based on a new triple activation function.
- Author
-
Chen, Guanyu, Wang, Quanyu, Li, Xiang, and Zhang, Yanyun
- Abstract
As one of the important components of a neural network, the activation function plays a very important role in model training. In this paper, the status quo, advantages, and disadvantages of the existing common activation functions are analysed, and a new activation function is proposed and applied to target detection. To test the performance of the new activation function, this paper compares it with the ReLU activation function on a variety of neural networks and datasets, and not only analyses the performance of the activation function itself but also verifies its effectiveness in target detection. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
41. An analytical approach for unsupervised learning rate estimation using rectified linear units.
- Author
-
Chaoxiang Chen, Golovko, Vladimir, Kroshchanka, Aliaksandr, Mikhno, Egor, Chodyka, Marta, and Lichograj, Piotr
- Subjects
BOLTZMANN machine ,TRANSFER functions - Abstract
Unsupervised learning based on restricted Boltzmann machines or autoencoders has become an important research domain in the area of neural networks. In this paper, mathematical expressions for adaptive learning-step calculation for an RBM with the ReLU transfer function are proposed. As a result, we can automatically estimate the step size that minimizes the loss function of the neural network and correspondingly update the learning step in every iteration. We give a theoretical justification for the proposed adaptive learning rate approach, which is based on the steepest descent method. The proposed technique for adaptive learning rate estimation is compared with the existing constant-step and Adam methods in terms of generalization ability and loss function, and we demonstrate that it provides better performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
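For a quadratic loss, the steepest-descent idea the abstract above builds on has a closed-form optimal step: the exact line-search step η = gᵀg / (gᵀHg). A generic numeric sketch of that step rule (not the paper's RBM-specific derivation):

```python
import numpy as np

# Quadratic loss L(w) = 0.5 * w^T H w with a fixed positive-definite H
H = np.array([[3.0, 1.0],
              [1.0, 2.0]])
w = np.array([4.0, -2.0])

for _ in range(20):
    g = H @ w                        # gradient of the quadratic loss
    if g @ g < 1e-16:                # converged: gradient vanished
        break
    eta = (g @ g) / (g @ (H @ g))    # exact line-search (steepest-descent) step
    w = w - eta * g                  # step size recomputed every iteration

loss = 0.5 * w @ H @ w
```

The point of the adaptive step is visible here: η is recomputed from the current gradient at every iteration rather than fixed in advance, which is what the paper generalizes to RBM training with ReLU units.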
42. Research on a Multi-Task Pedestrian Detection and Recognition Algorithm Based on Piecewise Linear Activation.
- Author
-
朱亚旋, 张达明, 尹荣彬, and 吴继超
- Abstract
Copyright of Automotive Digest is the property of Automotive Digest Editorial Office and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
43. Optimizing GANs for Cryptography: The Role and Impact of Activation Functions in Neural Layers Assessing the Cryptographic Strength.
- Author
-
Singh, Purushottam, Dutta, Sandip, and Pranav, Prashant
- Subjects
CRYPTOGRAPHY ,HYBRID systems ,GENERATIVE adversarial networks ,STRENGTH training ,MATHEMATICAL optimization - Abstract
Generative Adversarial Networks (GANs) have surfaced as a transformative approach in the domain of cryptography, introducing a novel paradigm where two neural networks, the generator (akin to Alice) and the discriminator (akin to Bob), are pitted against each other in a cryptographic setting. A third network, representing Eve, attempts to decipher the encrypted information. The efficacy of this encryption–decryption process is deeply intertwined with the choice of activation functions employed within these networks. This study conducted a comparative analysis of four widely used activation functions within a standardized GAN framework. Our recent explorations underscore the superior performance achieved when utilizing the Rectified Linear Unit (ReLU) in the hidden layers combined with the Sigmoid activation function in the output layer. The non-linear nature introduced by the ReLU provides a sophisticated encryption pattern, rendering the deciphering process for Eve intricate. Simultaneously, the Sigmoid function in the output layer guarantees that the encrypted and decrypted messages are confined within a consistent range, facilitating a straightforward comparison with original messages. The amalgamation of these activation functions not only bolsters the encryption strength but also ensures the fidelity of the decrypted messages. These findings not only shed light on the optimal design considerations for GAN-based cryptographic systems but also underscore the potential of investigating hybrid activation functions for enhanced system optimization. In our exploration of cryptographic strength and training efficiency using various activation functions, we discovered that the "ReLU and Sigmoid" combination significantly outperforms the others, demonstrating superior security and a markedly efficient mean training time of 16.51 s per 2000 steps. This highlights the enduring effectiveness of established methodologies in cryptographic applications. 
This paper elucidates the implications of these choices, advocating for their adoption in GAN-based cryptographic models, given the superior results they yield in ensuring security and accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
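The winning combination reported in the abstract above (ReLU in the hidden layers, Sigmoid at the output) amounts to the following forward pass. The layer sizes and random weights are illustrative only, not the study's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w1, w2):
    # Hidden layer: ReLU supplies the non-linearity the study credits
    # for producing intricate encryption patterns.
    h = np.maximum(w1 @ x, 0.0)
    # Output layer: Sigmoid confines every output to (0, 1), so encrypted
    # and decrypted messages live in a consistent range and are directly
    # comparable to the originals.
    return 1.0 / (1.0 + np.exp(-(w2 @ h)))

w1 = rng.normal(size=(16, 8))   # illustrative shapes, not the paper's
w2 = rng.normal(size=(8, 16))
msg = rng.normal(size=8)
out = forward(msg, w1, w2)
```

In the GAN-cryptography setup, the same activation pattern would be used in the Alice (generator), Bob, and Eve networks, with training driving Alice/Bob agreement while keeping Eve's reconstruction error high.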
44. Deep Learning Techniques for Vehicle Detection and Classification from Images/Videos: A Survey.
- Author
-
Berwo, Michael Abebe, Khan, Asad, Fang, Yong, Fahim, Hamza, Javaid, Shumaila, Mahmood, Jabar, Abideen, Zain Ul, and M.S., Syam
- Subjects
DEEP learning ,IMAGE recognition (Computer vision) ,INTELLIGENT transportation systems ,TRAFFIC density ,TECHNOLOGICAL innovations ,TOLLS - Abstract
Detecting and classifying vehicles as objects from images and videos is challenging in appearance-based representation, yet plays a significant role in the substantial real-time applications of Intelligent Transportation Systems (ITSs). The rapid development of Deep Learning (DL) has resulted in the computer-vision community demanding efficient, robust, and outstanding services to be built in various fields. This paper covers a wide range of vehicle detection and classification approaches and the application of these in estimating traffic density, real-time targets, toll management and other areas using DL architectures. Moreover, the paper also presents a detailed analysis of DL techniques, benchmark datasets, and preliminaries. A survey of some vital detection and classification applications, namely, vehicle detection and classification and performance, is conducted, with a detailed investigation of the challenges faced. The paper also addresses the promising technological advancements of the last few years. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
45. A Universal Activation Function for Deep Learning.
- Author
-
Seung-Yeon Hwang and Jeong-Joon Kim
- Subjects
DEEP learning ,CONVOLUTIONAL neural networks ,ARTIFICIAL neural networks ,COGNITIVE ability ,LEARNING ability ,GENERATING functions - Abstract
Recently, deep learning has achieved remarkable results in fields that require human cognitive ability, learning ability, and reasoning ability. Activation functions are very important because they provide the ability of artificial neural networks to learn complex patterns through nonlinearity. Various activation functions are being studied to solve problems such as vanishing gradients and dying nodes that may occur in the deep learning process. However, it takes a lot of time and effort for researchers to use the existing activation function in their research. Therefore, in this paper, we propose a universal activation function (UA) so that researchers can easily create and apply various activation functions and improve the performance of neural networks. UA can generate new types of activation functions as well as functions like traditional activation functions by properly adjusting three hyperparameters. The famous Convolutional Neural Network (CNN) and benchmark dataset were used to evaluate the experimental performance of the UA proposed in this study. We compared the performance of the artificial neural network to which the traditional activation function is applied and the artificial neural network to which the UA is applied. In addition, we evaluated the performance of the new activation function generated by adjusting the hyperparameters of the UA. The experimental performance evaluation results showed that the classification performance of CNNs improved by up to 5% through the UA, although most of them showed similar performance to the traditional activation function. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
46. Use of Artificial Neural Networks for Predicting Carbon and Nitrogen Removal Efficiencies in Biological Wastewater Treatment Plants
- Author
-
Neslihan Manav Demir
- Subjects
Sinc function ,Chemistry ,Activation function ,Chemical oxygen demand ,Total nitrogen ,Activated sludge model ,Pulp and paper industry ,Kjeldahl method ,Nitrogen removal ,Backpropagation - Abstract
Although Activated Sludge Model No. 1 (ASM1) has been used for modelling biological nitrogen removal processes, estimating the input parameters required to run this model necessitates complicated laboratory analyses. In this study, the performance of Backpropagation Artificial Neural Networks (BPANN), which require considerably fewer input parameters, in predicting chemical oxygen demand (COD), total Kjeldahl nitrogen (TKN), and total nitrogen (TN) removal efficiencies was tested. For this purpose, four activation functions were employed in the BPANN. Results suggested that COD, TKN, and TN removal efficiencies in AO processes can be accurately estimated using BPANN, with the highest learning and prediction capacity when the Sinc function is employed. The mean square errors (MSEs) with Sinc-BPANN were calculated as 2.50 × 10⁻⁴ for COD removal efficiency, 4.15 × 10⁻⁴ for TKN removal efficiency, and 2.65 × 10⁻⁴ for TN removal efficiency. The Sinc-BPANN is therefore concluded to be an efficient tool for estimating the nonlinear COD, TKN, and TN removal efficiencies in AO processes from considerably fewer input parameters.
- Published
- 2017
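The Sinc activation that performed best in the study above is simple to state, with the removable singularity at zero handled explicitly. The unnormalized sin(x)/x form is an assumption here; the paper may use the normalized sin(πx)/(πx) variant:

```python
import math

def sinc(x):
    # Unnormalized sinc: sin(x)/x, with sinc(0) = 1 by continuity
    # (the limit of sin(x)/x as x -> 0).
    if x == 0.0:
        return 1.0
    return math.sin(x) / x
```

Unlike sigmoid-type activations, sinc is non-monotonic and oscillatory, which can help a small network fit the nonlinear removal-efficiency curves described in the abstract.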
47. XGL-T transformer model for intelligent image captioning.
- Author
-
Sharma, Dhruv, Dhiman, Chhavi, and Kumar, Dinesh
- Abstract
Image captioning extracts multiple semantic features from an image and integrates them into a sentence-level description. For efficient description of the captions, it becomes necessary to learn higher order interactions between detected objects and the relationship among them. Most of the existing systems take into account the first order interactions while ignoring the higher order ones. It is challenging to extract discriminant higher order semantics visual features in images with highly populated objects for caption generation. In this paper, an efficient higher order interaction learning framework is proposed using encoder-decoder based image captioning. A scaled version of Gaussian Error Linear Unit (GELU) activation function, x-GELU is introduced that controls the vanishing gradients and enhances the feature learning. To leverage higher order interactions among multiple objects, an efficient XGL Transformer (XGL-T) model is introduced that exploits both spatial and channel-wise attention by integrating four XGL attention modules in image encoder and one in Bilinear Long Short-Term Memory guided sentence decoder. The proposed model captures rich semantic concepts from objects, attributes, and their relationships. Extensive experiments are conducted on publicly available MSCOCO Karapathy test split and the best performance of the work is observed as 81.5 BLEU@1, 67.1 BLEU@2, 51.6 BLEU@3, 39.9 BLEU@4, 134 CIDEr, 59.9 ROUGE-L, 29.8 METEOR, 23.8 SPICE using CIDEr-D Score Optimization Strategy. The scores validate the significant improvements over state-of-the-art results. An ablation study is also carried out to support the experimental observations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. Application to Activation Functions through Fixed-Circle Problems with Symmetric Contractions.
- Author
-
Anjum, Rizwan, Abbas, Mujahid, Safdar, Hira, Din, Muhammad, Zhou, Mi, and Radenović, Stojan
- Subjects
BANACH spaces ,CIRCLE - Abstract
In this paper, our main aim is to present innovative fixed-point theorems that provide solutions to the fixed-circle problem with symmetric contractions. We accomplish this by employing operator enrichment techniques within the context of Banach spaces. Furthermore, we demonstrate the practical application of these theorems by showcasing their relevance to the rectified linear unit (ReLU) activation function. By exploring the connection between fixed points and activation functions, our work contributes to a deeper understanding of the behavior and properties of these fundamental mathematical concepts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. Development of a novel activation function based on Chebyshev polynomials: an aid for classification and denoising of images.
- Author
-
Deepthi, M., Vikram, G. N. V. R., and Venkatappareddy, P.
- Subjects
IMAGE denoising ,CHEBYSHEV polynomials ,IMAGE recognition (Computer vision) ,CONVOLUTIONAL neural networks ,ARTIFICIAL neural networks - Abstract
The main objective of this paper is to improve the efficiency and accuracy of convolutional neural network models for image classification and denoising tasks. The focus of the study is on enhancing the activation layer of these models, which is a critical component that determines the output of each neuron in the network. To achieve this goal, we propose a novel activation function based on Chebyshev polynomials, which is both data-driven and self-learnable. In addition to proposing the LIP model, the authors investigate its performance in approximating various nonlinearities and determine its Lipschitz bound. The study then evaluates the performance of the proposed activation function by conducting experiments on multiple datasets using different convolutional neural network models. The results show that the proposed activation function outperforms other activation layers and significantly enhances the accuracy of image classification and denoising tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
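A learnable activation built from Chebyshev polynomials, as described in the abstract above, can be sketched with the standard recurrence T₀(x) = 1, T₁(x) = x, T_{k+1}(x) = 2x·T_k(x) − T_{k−1}(x). The coefficients below are placeholders for what the paper's self-learnable layer would learn from data:

```python
def chebyshev_activation(x, coeffs):
    """Evaluate f(x) = sum_k coeffs[k] * T_k(x) via the Chebyshev recurrence.

    In a self-learnable activation layer, `coeffs` would be trainable
    parameters updated by backpropagation; callers here pass illustrative
    values, not learned ones.
    """
    t_prev, t_curr = 1.0, x          # T_0(x), T_1(x)
    total = coeffs[0] * t_prev
    if len(coeffs) > 1:
        total += coeffs[1] * t_curr
    for k in range(2, len(coeffs)):
        t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
        total += coeffs[k] * t_curr
    return total
```

For example, coeffs = [0, 0, 1] evaluates T₂(x) = 2x² − 1; a mix of coefficients lets one parametric family approximate many different nonlinearities, which is the flexibility the paper exploits.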
50. Performance analysis of multimodal medical image fusion using AMT-DWT-based pre-processing and customized CNN for denoising
- Author
-
Ghosh, Tanima and N., Jayanthi
- Published
- 2024
- Full Text
- View/download PDF