16,217 results for "soft computing"
Search Results
2. Aquila Optimizer for Hyperparameter Metaheuristic Optimization in ELM
- Author
-
Vasquez-Iglesias, Philip, Zabala-Blanco, David, Pizarro, Amelia E., Fuentes-Concha, Juan, Gonzalez, Paulo, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Hernández-García, Ruber, editor, Barrientos, Ricardo J., editor, and Velastin, Sergio A., editor
- Published
- 2025
- Full Text
- View/download PDF
3. SDF-FuzzIA: A Fuzzy-Ontology Based Plug-in for the Intelligent Analysis of Geo-Thematic Data
- Author
-
Filippone, Giuseppe, La Rosa, Gianmarco, Tabacchi, Marco Elio, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Destercke, Sébastien, editor, Martinez, Maria Vanina, editor, and Sanfilippo, Giuseppe, editor
- Published
- 2025
- Full Text
- View/download PDF
4. Soft Computing Paradigms for Load Balancing in Cloud Computing
- Author
-
Ghafir, Shabina, Alam, M. Afshar, Alankar, Bhavya, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Rawat, Sanyog, editor, Kumar, Arvind, editor, Raman, Ashish, editor, Kumar, Sandeep, editor, and Pathak, Parul, editor
- Published
- 2025
- Full Text
- View/download PDF
5. A Time-Efficient and Effective Image Contrast Enhancement Technique Using Fuzzification and Defuzzification
- Author
-
Rahman, Hafijur, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Mahmud, Mufti, editor, Kaiser, M. Shamim, editor, Bandyopadhyay, Anirban, editor, Ray, Kanad, editor, and Al Mamun, Shamim, editor
- Published
- 2025
- Full Text
- View/download PDF
6. A Comprehensive Review of Soft Computing Enabled Techniques for IoT Security: State-of-the-Art and Challenges Ahead
- Author
-
Parabrahmachari, Sriram, Narayanasamy, Srinivasan, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Tan, Kay Chen, Series Editor, Kumar, Amit, editor, Gunjan, Vinit Kumar, editor, Senatore, Sabrina, editor, and Hu, Yu-Chen, editor
- Published
- 2025
- Full Text
- View/download PDF
7. Prediction of Mode-I Fracture Toughness of the ISRM-Suggested Semi-Circular Bend Rock Specimen Using ANN and Optimized ANN Models.
- Author
-
Ogunsola, Nafiu Olanrewaju, Lawal, Abiodun Ismail, Kim, Gyeonggyu, Kim, Hanlim, and Cho, Sangho
- Subjects
METAHEURISTIC algorithms, FRACTURE toughness, ARTIFICIAL neural networks, SOFT computing, CIVIL engineering
- Abstract
A thorough understanding of the fracture behavior of rocks is essential for the successful and efficient design, implementation, completion, and structural stability of large-scale mining, tunneling, oil and gas, and civil engineering projects. This study employed four soft computing models: an artificial neural network trained with the Levenberg–Marquardt algorithm (ANN–LM), a grasshopper optimization algorithm-optimized ANN (ANN–GOA), a salp swarm algorithm-optimized ANN (ANN–SSA), and an arithmetic optimization algorithm-optimized ANN (ANN–AOA), to predict the Mode-I fracture toughness (KIc) of rock. For this purpose, a database comprising 121 experimental datasets obtained from KIc tests on semi-circular bend (SCB) rock samples was used to train and validate the models. Four important parameters affecting KIc, namely the uniaxial tensile strength, disc specimen radius, disc specimen thickness, and notch (crack) length, were selected as the input parameters. The ANN–GOA 4-9-1 model was judged the best of the generated KIc models according to the error metrics used to evaluate model performance: it had the lowest error metrics and the highest correlation coefficient for the overall dataset, with R = 0.98498, MSE = 0.0036, VAF = 97.02%, and a20-index = 0.96694. To ease implementation, the optimum ANN–GOA 4-9-1 model was transformed into a tractable closed-form explicit equation. Furthermore, the impact of each of the four effective parameters on the predicted KIc was evaluated, and the Brazilian tensile strength and rock specimen radius were found to be the parameters to which KIc is most sensitive. The proposed models therefore provide a robust and reliable alternative to the laborious and costly experimental determination of the KIc of rocks. Highlights: An ANN-based model for predicting the KIc of SCB specimens is presented. 121 experimental data points for SCB specimens were used for model development. The optimal ANN-based model was transformed into a closed-form equation. A variable importance analysis was performed to evaluate the impact of the predictors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
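Once trained, the 4-9-1 architecture reported in the abstract above reduces to a small closed-form computation, which is presumably why the authors could publish it as an explicit equation. A minimal sketch of such a forward pass; the weights, biases, and input scaling below are illustrative placeholders, not the paper's fitted values:

```python
import math

def ann_forward(x, W1, b1, w2, b2):
    """Forward pass of a 4-9-1 feed-forward network: tanh hidden layer,
    linear output, matching the ANN-GOA topology the abstract reports."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

# Placeholder weights -- the paper's fitted values live in its closed-form
# equation and are not reproduced here.
W1 = [[0.1] * 4 for _ in range(9)]   # 9 hidden cells x 4 inputs
b1 = [0.0] * 9
w2 = [0.1] * 9
b2 = 0.5
# Scaled inputs: tensile strength, specimen radius, thickness, crack length.
x = [1.0, 0.5, 0.2, 0.3]
print(round(ann_forward(x, W1, b1, w2, b2), 4))   # → 0.6776
```

With the metaheuristic-fitted weights substituted in, this is exactly the kind of "tractable closed-form explicit equation" the abstract describes.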
8. Prediction of operating speed on horizontal curves of two-lane rural highways using artificial intelligence.
- Author
-
Tottadi, K. K. and Mehar, A.
- Subjects
ARTIFICIAL neural networks, RANDOM forest algorithms, ARTIFICIAL intelligence, COMMERCIAL vehicles, SOFT computing
- Abstract
Horizontal geometric characteristics have a significant impact on the operating speed of vehicles on two-lane rural highways. Most studies have used the conventional modelling approach, which is found to be location-specific and can give a false judgement of operating speed. It is therefore important to apply artificial-intelligence-based methods to predict the operating speed of vehicles on two-lane roads under mixed traffic conditions. Field data were collected at 40 different locations (curves and tangent sections), including free speeds of vehicles and geometric parameters. Geometric parameters such as curve radius, curve length, deflection angle, degree of curvature, and preceding tangent length were measured in the field with a total station, whereas free-flow speed data were collected with a radar gun at the middle of the horizontal curves. The statistical analysis concluded that curve radius, curve length, degree of curvature, and preceding tangent length have a significant effect on the operating speed of cars, two-wheelers, three-wheelers, light commercial vehicles, and heavy commercial vehicles, and an MLR model was developed. Further, data-driven soft computing methods, namely Artificial Neural Network (ANN), Adaptive Neuro-Fuzzy Inference System (ANFIS), and Support Vector Regression (SVR), were applied to predict the operating speed of these vehicles, and the results were compared with the MLR model. The performance of the models, evaluated using various goodness-of-fit measures, indicates that the SVR model predicts operating speed better than the other models. As future research, further investigation could explore uncertainties, and the model could be enhanced using other geometric and traffic parameters and techniques such as random forest and XGBoost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
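The SVR model the abstract above favours predicts speed from a kernel expansion over support vectors. A minimal sketch of an RBF-kernel SVR decision function; the support vectors, dual coefficients, gamma, and bias here are invented for illustration, whereas a real model would be fitted to the field data:

```python
import math

def rbf_svr_predict(x, support_vectors, dual_coefs, gamma, bias):
    """Decision function of a fitted RBF-kernel SVR:
    f(x) = sum_i a_i * exp(-gamma * ||x - sv_i||^2) + b."""
    return bias + sum(
        a * math.exp(-gamma * sum((xi - si) ** 2 for xi, si in zip(x, sv)))
        for a, sv in zip(dual_coefs, support_vectors))

# Invented model: features are (scaled curve radius, scaled deflection angle),
# output is an operating speed in km/h.
svs = [[0.0, 0.0], [1.0, 1.0]]
coefs = [1.0, -0.5]
speed = rbf_svr_predict([0.0, 0.0], svs, coefs, gamma=1.0, bias=50.0)
print(round(speed, 4))   # → 50.9323
```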
9. Optimizing network lifetime in wireless sensor networks: a hierarchical fuzzy logic approach with LEACH integration.
- Author
-
Dadhirao, Chandrika, Reddy Sadi, Ram Prasad, Rao, Prabhakar, and Terlapu, Panduranga Vital
- Subjects
COMPUTER network protocols ,FUZZY logic ,TELECOMMUNICATION systems ,SOFT computing ,ENERGY consumption ,WIRELESS sensor networks - Abstract
Wireless sensor networks (WSNs) are of significant importance in many applications; nevertheless, their operational efficiency and longevity can be impeded by energy limitations. The low energy adaptive clustering hierarchy (LEACH) protocol was developed to balance energy consumption by regularly rotating cluster heads (CHs). This study presents a novel technique, the hierarchical fuzzy logic controller (HFLC), which is integrated with the LEACH protocol to enhance CH selection and effectively prolong the network's operational lifespan. HFLC employs fuzzy logic to address uncertainty and imprecision, assessing aspects including residual energy, node proximity, and network density in order to make informed decisions. The combination of HFLC with LEACH outperforms the conventional LEACH protocol in terms of energy efficiency, stability, and network durability. This study highlights the potential of intelligent, adaptive mechanisms to improve WSN performance by extending node survivability through reduced energy consumption during network communication. It also paves the way for future research that integrates soft computing approaches into network protocols. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
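The idea of scoring cluster-head candidates from residual energy, node proximity, and network density, as in the HFLC described above, can be sketched with simple membership ramps and a weighted aggregation. The membership shapes and weights below are assumptions, not the paper's rule base:

```python
def ch_score(energy, proximity, density):
    """Toy cluster-head fitness in the spirit of HFLC: fuzzy-style ramps
    map each input into [0, 1]; a weighted sum then stands in for the
    hierarchical rule base and defuzzification."""
    clamp = lambda v: min(max(v, 0.0), 1.0)
    e_high = clamp((energy - 0.2) / 0.6)   # residual energy: full credit at 0.8
    p_near = clamp(1.0 - proximity)        # closer to the base station is better
    d_dense = clamp(density / 10.0)        # more one-hop neighbours is better
    return 0.5 * e_high + 0.3 * p_near + 0.2 * d_dense

# Node with 80% residual energy, normalized distance 0.4, and 5 neighbours:
print(round(ch_score(0.8, 0.4, 5), 2))   # → 0.78
```

In a LEACH-style round, each node would compute this score locally and the highest-scoring nodes in each cluster would be elected CH.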
10. Predictive modeling of compressive strength in silica fume‐modified self‐compacted concrete: A soft computing approach.
- Author
-
Abdulrahman, Payam Ismael, Jaf, Dilshad Kakasor Ismael, Malla, Sirwan Khuthur, Mohammed, Ahmed Salih, Kurda, Rawaz, Asteris, Panagiotis G., and Sihag, Parveen
- Subjects
MACHINE learning, STANDARD deviations, SILICA fume, SOFT computing, COMPRESSIVE strength, SELF-consolidating concrete
- Abstract
Self‐compacting concrete (SCC) is a specialized type of concrete that features excellent fresh properties, enabling it to flow uniformly and compact under its own weight without vibration. SCC has been one of the most significant advancements in concrete technology over the past two decades. In efforts to reduce the environmental impact of cement production, a major source of CO2 emissions, silica fume (SF) is often used as a partial replacement for cement. SF‐modified SCC has become a common choice in construction. This study explores the effectiveness of soft computing models in predicting the compressive strength (CS) of SCC modified with varying amounts of silica fume. To achieve this, a comprehensive database was compiled from previous experimental studies, containing 240 data points related to CS. The compressive strength values in the database range from 21.1 to 106.6 MPa. The database includes seven independent variables: cement content (359.0–600.0 kg/m3), water‐to‐binder ratio (0.22–0.51), silica fume content (0.0–150.0 kg/m3), fine aggregate content (680.0–1166.0 kg/m3), coarse aggregate content (595.0–1000.0 kg/m3), superplasticizer content (1.5–15.0 kg/m3), and curing time (1–180 days). Four predictive models were developed based on this database: linear regression (LR), multi‐linear regression (MLR), full‐quadratic (FQ), and M5P‐tree models. The data were split, with two‐thirds used for training (160 data points) and one‐third for testing (80 data points). The performance of each model was evaluated using various statistical metrics, including the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), objective value (OBJ), scatter index (SI), and a‐20 index. The results revealed that the M5P‐tree model was the most accurate and reliable in predicting the compressive strength of SF‐based SCC across a wide range of strength values.
Additionally, sensitivity analysis indicated that curing time had the most significant impact on the mixture's properties. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Dynamic Self‐Triggered Parallel Control of Guidance Intercept Systems With Constrained Inputs via Adaptive Dynamic Programming.
- Author
-
Liu, Lu, Song, Ruizhuo, and Lian, Bosen
- Subjects
DYNAMIC programming, OPTIMIZATION algorithms, SOFT computing, SMOOTHNESS of functions
- Abstract
ABSTRACT This paper introduces an anti-saturation optimization algorithm based on dynamic self-triggered adaptive dynamic programming (ADP) for bounded-acceleration guidance interception. A nonlinear input-constrained guidance intercept and control system is established, and a smooth bounded function is used to constrain the system input so that the system operates within a controllable range. An appropriate performance function that accurately reflects the system is then designed. Exploiting the advantages of parallel control and ADP, a group of parallel systems is constructed by modeling the derivative of the control input. A self-learning control framework is explored, facilitating virtual-actual interaction and mutual reinforcement between multiple controllers and optimizing the management of the interception system. Furthermore, event-triggered control (ETC) policies are devised that update only when needed, by setting trigger conditions, thus saving data resources. A stability proof of the closed-loop system ensures stable operation when the trigger condition is satisfied. On this basis, a dynamic self-triggered control (DSTC) scheme with soft computing is put forward, enabling trigger-instant calculation without real-time system monitoring and further extending the trigger interval. Simulation results show that the devised guidance scheme can intercept a maneuvering target at a reduced expenditure of communication resources. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Intelligent agricultural robotic detection system for greenhouse tomato leaf diseases using soft computing techniques and deep learning.
- Author
-
Mac, Thi Thoa, Nguyen, Tien-Duc, Dang, Hong-Ky, Nguyen, Duc-Toan, and Nguyen, Xuan-Thuan
- Subjects
GENERATIVE adversarial networks, AGRICULTURE, SOFT computing, DEEP learning, PLANT classification, TOMATOES
- Abstract
The development of soft computing methods has had a significant influence on autonomous intelligent agriculture. This paper offers a system for autonomous greenhouse navigation that employs a fuzzy control algorithm together with a deep learning-based disease classification model for tomato plants, identifying illnesses from photos of tomato leaves. The primary novelty of this study is an upgraded Deep Convolutional Generative Adversarial Network (DCGAN) that creates augmented images of diseased tomato leaves from genuine original samples, considerably enlarging the training dataset. To find the best training model, four deep learning networks (VGG19, Inception-v3, DenseNet-201, and ResNet-152) were carefully compared on a dataset of nine tomato leaf disease classes. These models achieved validation accuracies of 92.32%, 90.83%, 96.61%, and 97.07%, respectively, on the original PlantVillage dataset. Using the augmented dataset, the ResNet-152 architecture reaches a high accuracy of 99.69%, compared with 97.07% on the original dataset. This improvement demonstrates the value of the proposed DCGAN in improving the performance of the deep learning model for greenhouse plant monitoring and disease detection. Furthermore, the proposed approach may find broader use in other agricultural scenarios, potentially transforming the field of autonomous intelligent agriculture. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Semi-Supervised Soft Computing for Ammonia Nitrogen Using a Self-Constructing Fuzzy Neural Network with an Active Learning Mechanism.
- Author
-
Zhou, Hongbiao, Huang, Yang, Yang, Dan, Chen, Lianghai, and Wang, Le
- Subjects
FUZZY neural networks ,STANDARD deviations ,SOFT computing ,K-means clustering ,WATER quality - Abstract
Ammonia nitrogen (NH3-N) is a key water quality variable that is difficult to measure in the water treatment process. Data-driven soft computing is one of the effective approaches to address this issue. Since the detection of NH3-N is very expensive, a large number of NH3-N values are missing in the collected water quality dataset; that is, a large number of unlabeled data are obtained. To enhance the prediction accuracy of NH3-N, a semi-supervised soft computing method using a self-constructing fuzzy neural network with an active learning mechanism (SS-SCFNN-ALM) is proposed in this study. In the SS-SCFNN-ALM, firstly, to reduce the computational complexity of active learning, the kernel k-means clustering algorithm is utilized to cluster the labeled and unlabeled data, respectively. Then, the clusters with larger information values are selected from the unlabeled data using a distance metric criterion. Furthermore, to improve the quality of the selected samples, a Gaussian regression model is adopted to eliminate redundant samples with large similarity from the selected clusters. Finally, the selected unlabeled samples are manually labeled, that is, the NH3-N values are added into the dataset. To realize the semi-supervised soft computing of the NH3-N concentration, the labeled dataset and the manually labeled samples are combined and sent to the developed SCFNN. The experimental results demonstrate that the test root mean square error (RMSE) and test accuracy of the proposed SS-SCFNN-ALM are 0.0638 and 86.31%, respectively, which are better than those of the SCFNN (without the active learning mechanism), MM, DFNN, SOFNN-HPS, and other comparison algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
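The active-learning step described in the abstract above, selecting informative unlabeled samples by a distance criterion before sending them for manual labelling, can be sketched as follows. This is a point-level stand-in for the paper's cluster-level selection, with invented 2-D data:

```python
def most_informative(unlabeled, labeled, k):
    """Rank unlabeled points by squared distance to the nearest labeled
    point (a simple stand-in for the paper's cluster 'information value')
    and return the k most distant -- the candidates worth the cost of a
    manual NH3-N measurement."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    ranked = sorted(unlabeled,
                    key=lambda u: min(d2(u, l) for l in labeled),
                    reverse=True)
    return ranked[:k]

labeled = [[0, 0], [1, 1]]
unlabeled = [[0, 1], [5, 5], [2, 2]]
print(most_informative(unlabeled, labeled, 2))   # → [[5, 5], [2, 2]]
```

The paper additionally prunes near-duplicate candidates with a Gaussian regression model; that redundancy filter is omitted here.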
14. Multiple objectives optimization of injection-moulding process for dashboard using soft computing and particle swarm optimization.
- Author
-
Moayyedian, Mehdi, Qazani, Mohammad Reza Chalak, Amirkhizi, Parisa Jourabchi, Asadi, Houshyar, and Hedayati-Dezfooli, Mohsen
- Subjects
PARTICLE swarm optimization, SOFT computing, FACTORIAL experiment designs, PLASTICS, TIME pressure
- Abstract
This research uses injection moulding simulation to assess defects in plastic products, including sink marks, shrinkage, and warpage. Process parameters, namely pure cooling time, mould temperature, melt temperature, and pressure holding time, are selected for investigation, since they significantly affect the physical and mechanical properties of the final product. A full factorial design of experiments is employed to identify optimal settings. A CAD model of a dashboard component is integrated into a finite element (FE) simulation to quantify shrinkage, warpage, and sink marks, and the four chosen injection moulding parameters undergo a comprehensive experimental design. Decision tree, multilayer perceptron, long short-term memory, and gated recurrent unit models are explored for modelling the injection moulding process, and the best model is used to estimate the defects. Multi-objective particle swarm optimisation then extracts the optimal process parameters. The proposed method is implemented in MATLAB and provides 18 optimal solutions based on the extracted Pareto front. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Predicting grout's uniaxial compressive strength (UCS) for fully grouted rock bolting system by applying ensemble machine learning techniques.
- Author
-
Hosseini, Shahab, Entezam, Shima, Jodeiri Shokri, Behshad, Mirzaghorbanali, Ali, Nourizadeh, Hadi, Motallebiyan, Amin, Entezam, Alireza, McDougall, Kevin, Karunasena, Warna, and Aziz, Naj
- Subjects
ROCK bolts, FLY ash, COMPRESSIVE strength, SOFT computing, REGRESSION trees
- Abstract
This study proposes a novel system for accurately predicting grout's uniaxial compressive strength (UCS) in fully grouted rock bolting systems. To achieve this, a database comprising 73 UCS values, with water-to-grout (W/G) ratios ranging from 22 to 42%, curing times from 1 to 28 days, fly ash admixture contents from 0 to 30%, and two Australian commercial grouts (Stratabinder HS and BU-100), was built after a comprehensive series of experimental tests. After building the dataset, a metaheuristic technique, the jellyfish search (JS) algorithm, was employed to determine the weights of the base models in the ensemble system, which combined various data and modelling techniques to enhance the accuracy of the UCS predictions. What sets this technique apart is the comprehensive database and the innovative use of the JS algorithm to create a weighted averaging ensemble model, going beyond traditional methods for predicting grout strength. In the proposed ensemble model, called the weighted averaging ensemble model (WAE-JS), the outputs of several soft computing models, namely multi-layer perceptron (MLP), Bayesian regularized (BR) neural networks, generalized feed-forward (GFF) neural networks, classification and regression tree (CART), and random forest (RF), were weighted based on JS, and new results were then generated. Eventually, the result of WAE-JS was compared to the other models based on statistical parameters, such as the R-squared coefficient, RMSE, and VAF, as indices for evaluating the performance and capability of the proposed model. The results suggested the superiority of the ensemble WAE-JS system over the base models; the proposed WAE-JS model effectively improved the prediction accuracy achieved by the MLP, BR, GFF, CART, and RF models. Furthermore, the sensitivity analysis revealed that the W/G ratio had the most significant impact on the grout's UCS values. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
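The weighted-averaging combination at the core of the WAE-JS model described above is simple once the weights are known; only finding them requires the jellyfish search. A sketch with fixed illustrative weights and invented base-model predictions:

```python
def weighted_ensemble(predictions, weights):
    """Combine base-model predictions with a normalized weighted average.
    In WAE-JS the weights come from the jellyfish search algorithm; here
    they are fixed for illustration."""
    total = float(sum(weights))
    return [sum(w / total * p[i] for w, p in zip(weights, predictions))
            for i in range(len(predictions[0]))]

# Hypothetical UCS predictions (MPa) from three base models on two samples:
preds = [[10.0, 20.0],   # e.g. MLP
         [12.0, 18.0],   # e.g. BR network
         [11.0, 22.0]]   # e.g. RF
print(weighted_ensemble(preds, [2.0, 1.0, 1.0]))   # → [10.75, 20.0]
```

A metaheuristic such as JS would search the weight vector to minimize, say, the ensemble RMSE on a validation set.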
16. A scientometrics review of conventional and soft computing methods in the slope stability analysis.
- Author
-
Ahmad, Feezan, Tang, Xiao-Wei, Ahmad, Mahmood, Najeh, Taoufik, Gamil, Yaser, Ebid, Ahmed M., and Song, Jinhu
- Subjects
ARTIFICIAL neural networks, SLOPE stability, KRIGING, SOFT computing, SUPPORT vector machines
- Abstract
Predicting slope stability is important for preventing and mitigating landslide disasters. This paper examines the existing approaches to slope stability analysis. Several established conventional approaches can be applied in this context; however, in recent decades, soft computing methods have been extensively developed and employed in stochastic slope stability analysis, notably as surrogate models that improve computational efficiency relative to traditional approaches. Soft computing methods can deal with uncertainty and imprecision, which may be quantified using performance indices such as the coefficient of determination in regression and accuracy in classification. This review focuses on conventional methods such as Bishop's method and Janbu's method, as well as soft computing models such as support vector machines, artificial neural networks, Gaussian process regression, and decision trees. The advantages and limitations of soft computing techniques relative to conventional methods are also covered thoroughly. The achievements of soft computing methods are summarized from two aspects: predicting the factor of safety and classifying slope stability. Key potential research challenges and future prospects are also given. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. LSTM Gate Disclosure as an Embedded AI Methodology for Wearable Fall-Detection Sensors †.
- Author
-
Correia, Sérgio D., Roque, Pedro M., and Matos-Carvalho, João P.
- Subjects
ARTIFICIAL intelligence, WIRELESS sensor networks, SOFT computing, WEARABLE technology, OLDER people
- Abstract
In this paper, the concept of symmetry is used to design the efficient inference of a fall-detection algorithm for elderly people on embedded processors; that is, there is a symmetric relation between the model's structure and its memory footprint on the embedded processor. Artificial intelligence (AI), and more particularly Long Short-Term Memory (LSTM) neural networks, are commonly used to detect falls in the elderly population from acceleration measurements. Nevertheless, embedded systems that may be utilized in wearable or wireless sensor networks face a hurdle in the customarily large size of those networks. Because of this, the most popular implementations of these algorithms rely on edge or cloud computing, which raises privacy concerns and presents challenges, since a lot of data must be sent over a communication channel. The current work proposes a memory occupancy model for LSTM-type networks to pave the way to more efficient embedded implementations. It also offers a sensitivity analysis of the network hyper-parameters through a grid search procedure to refine the LSTM topology under scrutiny. Lastly, it proposes a new methodology that acts on the quantization granularity for embedded AI implementations on wearable devices. Extensive simulation results demonstrate the effectiveness and feasibility of the proposed methodology: for the embedded implementation of the LSTM fall-detection model on a wearable platform, an STM8L low-power processor could support a 40-hidden-cell LSTM network with an accuracy of 96.52%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
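A memory-occupancy model of the kind proposed above can be approximated by counting LSTM parameters: each of the four gates (input, forget, cell, output) holds an input-weight matrix, a recurrent matrix, and a bias vector. A simplified sketch that counts weights only, ignoring activations, work buffers, and alignment, which the paper's full model on the STM8L would have to include:

```python
def lstm_param_count(n_inputs, n_hidden):
    """Parameters of one LSTM layer: 4 gates, each with an input-weight
    matrix (h x i), a recurrent matrix (h x h), and a bias vector (h)."""
    return 4 * (n_hidden * n_inputs + n_hidden * n_hidden + n_hidden)

def lstm_memory_bytes(n_inputs, n_hidden, bytes_per_weight):
    """Crude footprint estimate: weight storage at a given quantization
    granularity (a simplification of the paper's occupancy model)."""
    return lstm_param_count(n_inputs, n_hidden) * bytes_per_weight

# 40 hidden cells on 3 accelerometer axes, weights quantized to 1 byte:
print(lstm_memory_bytes(3, 40, 1))   # → 7040
```

This is the lever the abstract's quantization-granularity methodology pulls: halving `bytes_per_weight` halves the dominant (h x h) term's storage.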
18. Evaluating the impact of eccentric loading on strip footing above horseshoe tunnels in rock mass using adaptive finite element limit analysis and machine learning.
- Author
-
Kumar, Aayush and Chauhan, Vinay Bhushan
- Subjects
ARTIFICIAL neural networks, FAILURE mode & effects analysis, FINITE element method, SOFT computing, IMPACT loads, TUNNEL ventilation
- Abstract
The present study investigates the ultimate bearing capacity (UBC) of a footing subjected to an eccentric load situated above an unlined horseshoe-shaped tunnel in the rock mass, following the Generalized Hoek-Brown (GHB) failure criterion. A reduction factor (Rf) is introduced to investigate the impact of the tunnel on the UBC of the footing. Rf is determined using upper and lower bound analyses with adaptive finite-element limit analysis. The study examines the influence of several independent variables, including normalized load eccentricity (e/B), normalized vertical and horizontal distances (δ/B and H/B) of the footing from the tunnel, tunnel size (W/B), and other rock mass parameters. It was found that all these parameters significantly affect the behavior of tunnel-footing interaction depending on the range of varying parameters. The findings of the study indicate that the critical depth (when Rf is nearly 1) of the tunnel decreases with increasing load eccentricity. The critical depth is found to be δ/B ≥ 2 for e/B ≤ 0.2 and δ/B ≥ 1.5 for e/B ≥ 0.3, regardless of H/B ratios. Additionally, the GHB parameters of the rock mass significantly influence the interaction between the tunnel and the footing. Moreover, this study identifies some typical potential failure modes depending on the tunnel position. The typical potential failure modes of the footing include punching failure, cylindrical shear wedge failure, and Prandtl-type failure. This study also incorporates soft computing techniques and formulates empirical equations to predict Rf using artificial neural networks (ANNs) and multiple linear regression (MLR). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Novel intelligent Bayesian computing networks for predictive solutions of nonlinear multi-delayed tumor oncolytic virotherapy systems.
- Author
-
Anwar, Nabeela, Ahmad, Iftikhar, Kiani, Adiqa Kausar, Shoaib, Muhammad, and Raja, Muhammad Asif Zahoor
- Subjects
ONCOLYTIC virotherapy, BAYESIAN analysis, CYTOTOXIC T cells, REGRESSION analysis, THERAPEUTICS, SOFT computing
- Abstract
Oncolytic viral immunotherapy is gaining considerable prominence in the treatment and rehabilitation of chronic diseases. Oncolytic viral therapy is an intriguing therapeutic approach due to its low toxicity and dual function of immune stimulation. This work designs a soft computing approach that exploits neural networks (NNs) optimized with Bayesian regularization (BR), i.e. the NNs-BR procedure. The constructed NNs-BR technique is exploited to determine an approximate numerical treatment of nonlinear multi-delayed tumor virotherapy (TVT) models in terms of the dynamic interactions between virus-free tumor cells, virus-infected tumor cells, viruses, and cytotoxic T-lymphocytes (CTLs). A state-of-the-art numerical approach is used to develop the reference dataset for variations in the infection rate of tumor cells, the clearance rate of virus-free tumor cells by CTLs, the CTL clearance rate for infected tumor cells, the natural lifecycles of infected tumor cells, viral cells, and CTLs, the maximum proliferation rate of virus-free tumor cells, the production of infected tumor cells, the CTL stimulation ratios for infected tumor cells and for virus-free cells, and the time delay. The dataset is randomly segmented into training, testing, and validation samples to construct the NNs models optimized with backpropagated BR, representing approximate numerical solutions of the dynamic interactions in the TVT model. The performance of the designed NNs-BR technique is assessed, and the outcomes are found to be in good agreement with the reference solutions, with accuracies ranging from 10⁻⁹ to 10⁻¹⁶. The efficacy of the NNs-BR paradigm is further substantiated by rigorous analysis of regression metrics, MSE learning curves, and error histograms for the dynamics of the TVT model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Advanced Soft Computing Ensemble for Modeling Contaminant Transport in River Systems: A Comparative Analysis and Ecological Impact Assessment.
- Author
-
Chabokpour, Jafar
- Subjects
SOFT computing, WATERSHEDS, ARTIFICIAL neural networks, ENVIRONMENTAL protection
- Abstract
The paper applies soft computing techniques to contaminant transport modeling in river systems, focusing on the Monocacy River. The research employed various techniques, including Artificial Neural Networks (ANN), Adaptive Neuro-Fuzzy Inference Systems (ANFIS), Support Vector Regression (SVR), and Genetic Algorithms (GA), to predict pollutant concentrations and estimate transport parameters. The ANN, particularly the Long Short-Term Memory architecture, performed best, with the lowest RMSE (0.37) and the highest R-squared (0.958). The ANFIS model obtained an RMSE of 0.40 and an R-squared of 0.945, balancing accuracy and interpretability. SVR with an RBF kernel was robust, attaining an RMSE of 0.42 and an R-squared of 0.940, along with very fast training times. The average flow velocities and longitudinal dispersion coefficients at different reaches were estimated to be in the ranges 0.30 to 0.42 m/s and 0.18 to 0.31 m²/s, respectively. In addition, the potentially affected fraction of species at peak concentrations was used for the ecological impact assessment, with values ranging from 0.07 to 0.35. In the time-varying estimation, the dispersion coefficient and the decay rate vary over 48 hours from 0.75 to 0.89 m²/s and from 0.10 to 0.13 day⁻¹, respectively. The research demonstrates the potential of soft computing approaches for modeling complex pollutant dynamics and provides valuable insights into river management and environmental protection strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
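As a runnable sketch of the SVR-with-RBF-kernel component reported in the entry above, the snippet fits scikit-learn's SVR to a hypothetical Gaussian breakthrough curve; the pulse shape, hyperparameters, and units are assumptions, not the Monocacy data.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical stand-in data: a Gaussian pollutant pulse passing a
# monitoring station, concentration in mg/L over 48 hours.
t = np.linspace(0, 48, 200)[:, None]                            # hours
c = 5.0 * np.exp(-((t.ravel() - 20.0) ** 2) / (2 * 4.0 ** 2))   # mg/L

model = SVR(kernel="rbf", C=10.0, gamma=0.05, epsilon=0.01)
model.fit(t, c)
r2 = model.score(t, c)                                          # R-squared
rmse = float(np.sqrt(np.mean((model.predict(t) - c) ** 2)))
```

On real breakthrough data the kernel width (gamma) and tube width (epsilon) would be tuned by cross-validation rather than set by hand.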
21. A Prediction of Manning's nr in Compound Channels with Converged and Diverged Floodplains using GMDH Model.
- Author
-
Bijanvand, Sajad, Mohammadi, Mirali, and Parsaie, Abbas
- Subjects
WATERWAYS ,FLOODPLAINS ,SEDIMENTATION & deposition ,FLOODS ,SOFT computing - Abstract
In the study of natural waterways, formulas such as Manning's equation are prevalent for analyzing flow structure characteristics. Typically, floodplains exhibit greater roughness than the main river channel, which results in higher flow velocities within the main channel. This difference in velocity can increase the sedimentation potential within the floodplains. Therefore, accurately determining Manning's roughness coefficient for compound channels, particularly during flood events, is of significant interest to researchers. This study aims to model the Manning roughness coefficient in compound channels with both converging and diverging floodplains using advanced soft computing techniques: a multi-layer artificial neural network (MLPNN), the Group Method of Data Handling (GMDH), and the Neuro-Fuzzy Group Method of Data Handling (NF-GMDH). For the analysis, a dataset from 196 laboratory experiments was used, divided into training and testing subsets. Input variables included the longitudinal slope (So), relative hydraulic radius (Rr), relative depth (Dr), relative dimension of flow aspects (δ*), and the convergent or divergent angle (θ) of the floodplain. The relative Manning roughness coefficient (nr) was the output variable of interest. The results showed that all the models performed well, with the MLPNN model achieving the highest accuracy, characterized by R² = 0.99, RMSE = 0.001, SI = 0.0015, and DDR = 0.0233 during the testing phase. Further analysis of the soft computing models indicated that the most critical parameters influencing the results were So, Rr, Dr, δ*, and θ. These findings highlight the effectiveness of soft computing techniques in accurately modeling the Manning roughness coefficient in complex channel conditions and provide valuable insights for future research and practical applications in the management of flood events and waterway analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
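A minimal stand-in for the MLPNN regression stage described in the entry above: a scikit-learn MLP mapping five inputs (in the roles of So, Rr, Dr, δ*, θ) to a relative-roughness-like target. The target relation, the unit input ranges, and the 150/46 train-test split are invented for illustration; the study does not state its split sizes.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 196                                   # same size as the laboratory dataset
X = rng.uniform(0, 1, size=(n, 5))        # stand-ins for So, Rr, Dr, delta*, theta
# Hypothetical smooth surrogate for the relative Manning coefficient nr:
y = 0.5 + 0.3 * X[:, 0] - 0.2 * X[:, 1] * X[:, 2] + 0.1 * np.sin(3 * X[:, 4])

mlp = MLPRegressor(hidden_layer_sizes=(16, 16), solver="lbfgs",
                   max_iter=5000, random_state=0)
mlp.fit(X[:150], y[:150])                 # assumed training subset
r2_test = mlp.score(X[150:], y[150:])     # held-out performance
```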
22. Evaluating Volatility Using an ANFIS Model for Financial Time Series Prediction.
- Author
-
Orozco-Castañeda, Johanna M., Alzate-Vargas, Sebastián, and Bedoya-Valencia, Danilo
- Subjects
BOX-Jenkins forecasting ,TIME series analysis ,FUZZY systems ,MATHEMATICAL optimization ,PRICES - Abstract
This paper develops and implements an Autoregressive Integrated Moving Average model with an Adaptive Neuro-Fuzzy Inference System (ARIMA-ANFIS) for BTCUSD price prediction and risk assessment. The goal of these forecasts is to identify patterns in past data and achieve an understanding of the future behavior of the price and its volatility. The proposed ARIMA-ANFIS model is compared with a benchmark ARIMA-GARCH model. To evaluate the adequacy of the models in terms of risk assessment, we compare the confidence intervals of the price and accuracy measures for the testing sample. Additionally, we implement the Diebold-Mariano test to compare the accuracy of the two volatility forecasts. The results revealed that each volatility model focuses on different aspects of the data dynamics. The ANFIS model, while effective in certain scenarios, may expose one to unexpected risks due to its underestimation of volatility during turbulent periods. On the other hand, the GARCH(1,1) model, by producing higher volatility estimates, may lead to excessive caution, potentially reducing returns. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
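The Diebold-Mariano comparison used in the entry above can be implemented in a few lines; this version uses squared-error loss and a normal approximation with h−1 autocovariance lags (a simplification of fully HAC-corrected variants), applied to hypothetical error series rather than the BTCUSD forecasts.

```python
import numpy as np
from scipy.stats import norm

def diebold_mariano(e1, e2, h=1):
    """DM test on two forecast-error series under squared-error loss.
    Returns the DM statistic and a two-sided p-value; h is the horizon."""
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2   # loss differential
    n = d.size
    dbar = d.mean()
    var = np.mean((d - dbar) ** 2)                  # lag-0 autocovariance
    for k in range(1, h):                           # add 2*cov for lags < h
        var += 2 * np.mean((d[k:] - dbar) * (d[:-k] - dbar))
    dm = dbar / np.sqrt(var / n)
    return dm, 2 * norm.sf(abs(dm))

rng = np.random.default_rng(42)
e_anfis = rng.normal(0, 1.0, 500)    # hypothetical model-1 errors
e_garch = rng.normal(0, 1.3, 500)    # hypothetical, clearly larger errors
dm, p = diebold_mariano(e_anfis, e_garch)
```

A significantly negative statistic favors the first series; swapping the arguments flips the sign.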
23. Passive earth pressure on vertical rigid walls with negative wall friction coupling statically admissible stress field and soft computing.
- Author
-
Shiau, Jim, Nguyen, Tan, and Bui-Ngoc, Tram
- Subjects
- *
EARTH pressure , *SOFT computing , *RADIAL stresses , *NUMERICAL integration - Abstract
It is well known that the roughness of a wall plays a crucial role in determining the passive earth pressure exerted on a rigid wall. While the effects of positive wall roughness have been extensively studied in the past few decades, the study of passive earth pressure with negative wall friction is rarely found in the literature. This study aims to provide a precise solution for negative friction walls under passive wall conditions. The research is initiated by adopting a radial stress field for the cohesionless backfill and employs the concept of stress self-similarity. The problem is then formulated so that a statically admissible stress field can be developed throughout the analyzed domain using a two-step numerical framework. The framework involves the successful execution of numerical integration, which leads to the exploration of the statically admissible stress field in cohesionless backfills under negative wall friction. This, in turn, helps to shed light on the mechanism of load transfer in such situations, so that reliable design charts and tables can be provided for practical use. The study continues with a soft computing model that leads to more robust and effective designs for earth-retaining structures under various negative wall frictions and sloping backfills. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. An O-vanillin scaffold as a selective chemosensor of PO₄³⁻ and the application of neural network based soft computing to predict machine learning outcomes.
- Author
-
Mudi, Naren, Samanta, Shashanka Shekhar, Mandal, Sourav, Barman, Suraj, Beg, Hasibul, and Misra, Ajay
- Subjects
- *
SOFT computing , *EDUCATIONAL outcomes , *ARTIFICIAL neural networks , *SCHIFF bases , *LOGIC circuits , *MACHINE learning , *OCHRATOXINS , *VOLTAGE-controlled oscillators - Abstract
The O-vanillin-derived Schiff base 1-[(E)-(2-hydroxy-3-methoxybenzylidene) amino]-4-methylthiosemicarbazone (VCOH) has been synthesized as a colorimetric and fluorescence chemosensor towards PO₄³⁻ ions. A fluorescence 'turn-on' sensing mechanism of VCOH towards PO₄³⁻ ions is explained by emission from the VCO⁻ ion formed upon transfer of the phenolic proton of VCOH to a PO₄³⁻ ion. The 1 : 1 stoichiometry between the VCOH probe and the PO₄³⁻ ion is confirmed by a Job's plot based on UV-vis titration. The limit of detection (LOD) of VCOH towards PO₄³⁻ ions is found to be 0.49 nM. The PO₄³⁻ ion sensing property of probe VCOH has been applied to prepare portable paper strips and to analyze real water samples. Fluorescence 'turn-on' and 'turn-off' responses of VCOH towards PO₄³⁻ and H⁺, respectively, have been used to construct a molecular logic gate. Fluorescence-based sensing studies in which the concentration of analytes is adjusted over a broad range can be both laborious and expensive. To address these challenges, we have utilized various soft computing methods, including artificial neural networks (ANN), fuzzy logic (FL), and adaptive neuro-fuzzy inference systems (ANFIS), to model the 'turn-on' and 'turn-off' behaviors of the VCOH probe upon addition of PO₄³⁻ and H⁺, respectively, and to predict the experimental sensing data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
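The 1 : 1 Job's-plot argument in the entry above has a compact numerical analogue: solve the quadratic binding isotherm at fixed total concentration and locate the mole-fraction maximum, which falls at 0.5 for 1 : 1 stoichiometry. The total concentration and binding constant below are illustrative, not the measured VCOH values.

```python
import numpy as np

def job_plot(x, total=1e-5, K=1e7):
    """Complex concentration [probe-analyte] versus probe mole fraction x
    for a hypothetical 1:1 equilibrium with association constant K (M^-1),
    from the exact quadratic solution of the binding isotherm."""
    A, B = x * total, (1 - x) * total        # probe and analyte totals
    s = A + B + 1.0 / K
    return (s - np.sqrt(s ** 2 - 4 * A * B)) / 2

x = np.linspace(0.01, 0.99, 99)
ab = job_plot(x)
x_max = x[np.argmax(ab)]    # 1:1 binding peaks at mole fraction ~0.5
```

A 2 : 1 complex would instead peak near x ≈ 0.67, which is how the plot distinguishes stoichiometries.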
25. Deep learning framework for stock price prediction using long short-term memory.
- Author
-
Chandar, S. Kumar
- Subjects
- *
STOCK price forecasting , *STOCK prices , *INVESTORS , *ARTIFICIAL intelligence , *SOFT computing - Abstract
Forecasting stock prices is widely considered a complicated process due to the dynamic and noisy characteristics of stock data influenced by external factors. Several approaches have been put forward for predicting the stock market, and many academics have successfully forecasted stock prices using soft computing models. Recently, there has been growing interest in applying deep learning techniques in combination with technical indicators to forecast stock prices, attracting attention from both investors and researchers. This paper focuses on developing a reliable model for anticipating future stock prices one day in advance using Long Short-Term Memory (LSTM). Three steps make up the suggested model. The approach begins with ten technical indicators computed from previous data as feature vectors. The second phase involves data normalization to scale the feature vectors. Finally, in the third phase, the LSTM model predicts the closing price for the next day using the normalized features as input. Two stock markets, NASDAQ and NYSE, are chosen to evaluate the efficacy of the developed model, and its performance is compared to earlier models. The findings revealed that the suggested model achieved a high level of prediction accuracy relative to those models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
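The three-step pipeline in the entry above (indicators, normalization, LSTM input) can be sketched up to the LSTM stage. The two indicators and the 10-day window below are illustrative stand-ins for the paper's ten indicators, computed on synthetic prices.

```python
import numpy as np

def sma(prices, w):
    """Simple moving average, one illustrative technical indicator."""
    return np.convolve(prices, np.ones(w) / w, mode="valid")

def momentum(prices, w):
    """w-day momentum, a second illustrative indicator."""
    return prices[w:] - prices[:-w]

def minmax(x):
    """Phase-two normalization: scale a feature into [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

rng = np.random.default_rng(7)
close = 100 + np.cumsum(rng.normal(0, 1, 300))  # synthetic closing prices
s = sma(close, 10)[1:]                          # aligned to days 10..299
m = momentum(close, 10)                         # aligned to days 10..299
feats = np.column_stack([minmax(s[:-1]),        # inputs for days 10..298
                         minmax(m[:-1])])
target = close[11:]                             # next-day closes for the LSTM
```

Each row of `feats` paired with the matching `target` entry is one training sample for the phase-three LSTM.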
26. Predicting axial-bearing capacity of fully grouted rock bolting systems by applying an ensemble system.
- Author
-
Hosseini, Shahab, Jodeiri Shokri, Behshad, Mirzaghorbanali, Ali, Nourizadeh, Hadi, Entezam, Shima, Motallebiyan, Amin, Entezam, Alireza, McDougall, Kevin, Karunasena, Warna, and Aziz, Naj
- Subjects
- *
ROCK bolts , *PEAK load , *ARTIFICIAL intelligence , *SOFT computing , *SENSITIVITY analysis - Abstract
In this paper, the potential of five recent artificial intelligence (AI) predictive techniques, namely multiple linear regression (MLR), the multi-layer perceptron neural network (MLPNN), the Bayesian regularized neural network (BRNN), generalized feed-forward neural networks (GFFNN), and extreme gradient boosting (XGBoost), together with their ensemble soft computing models, was evaluated to predict the maximum peak load (PL) and displacement (DP) values resulting from pull-out tests. For this, 34 samples of fully cementitious grouted rock bolts were prepared and cast. After conducting pull-out tests and building a dataset, twenty-four tests were randomly assigned to a training dataset, and the remaining measurements were used to test the models' performance. The input parameters were the water-to-grout ratio (%) and curing time (days), while peak loads and displacement values were the outputs. The results revealed that the ensemble XGBoost model was superior to the other models, achieving higher values of R² (0.989, 0.979) and VAF (99.473, 98.658) and lower values of RMSE (0.0201, 0.0435) on the testing dataset for PL and DP, respectively. Besides, sensitivity analysis proved that curing time was the most influential parameter in estimating peak loads and displacements. The results also confirmed that the ensemble XGBoost method can predict the axial-bearing capacity of the fully cementitious grouted rock bolting system with high performance and accuracy. Eventually, the results of the ensemble XGBoost modeling technique suggested that this novel model was more economical, less time-consuming, and less complicated than laboratory activities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
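As a hedged stand-in for the boosted-tree model in the entry above (scikit-learn's gradient boosting is used so the sketch stays dependency-light; the paper uses XGBoost), the snippet mimics the 24/10 train-test split with the two inputs named in the abstract. The data-generating relation and parameter ranges are invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
n = 34                                  # same sample count as the pull-out tests
wg = rng.uniform(0.35, 0.50, n)         # water-to-grout ratio (assumed range)
cure = rng.uniform(1, 28, n)            # curing time in days (assumed range)
# Hypothetical peak-load relation: weaker grout and longer curing dominate.
peak = 80 - 60 * wg + 1.5 * np.log1p(cure) + rng.normal(0, 0.5, n)

X = np.column_stack([wg, cure])
model = GradientBoostingRegressor(random_state=0).fit(X[:24], peak[:24])
r2_train = model.score(X[:24], peak[:24])
r2_test = model.score(X[24:], peak[24:])
```

Feature importances from the fitted model (`model.feature_importances_`) play the role of the abstract's sensitivity analysis.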
27. Reliability‐based state parameter liquefaction probability prediction using soft computing techniques.
- Author
-
Kumar, Kishan, Samui, Pijush, and Choudhary, S. S.
- Subjects
- *
CONE penetration tests , *KRIGING , *SOFT computing , *ENGINEERING design , *CYCLIC loads - Abstract
The state parameter (ψ) accounts for both relative density and effective stress, which significantly influence the cyclic stress or liquefaction characteristic of the soil. This study presents a ψ-based probabilistic liquefaction evaluation method using six soft computing (SC) techniques. The liquefaction probability of failure (PL) is calculated using the first-order second-moment (FOSM) method based on the cone penetration test (CPT) database. Then, six SC techniques, namely Gaussian process regression (GPR), relevance vector machine (RVM), functional network (FN), genetic programming (GP), minimax probability machine regression (MPMR) and multivariate adaptive regression splines (MARS), are used to predict PL. The performance of these models is examined using nine statistical indices. Additionally, regression plots, Taylor diagrams, an error matrix and rank analysis are presented to assess the SC models' performance. Finally, sensitivity analysis is performed using the cosine amplitude method (CAM) to assess the influence of the input parameters on the output. The current study demonstrates that state-parameter-based SC models predict PL effectively. The RVM and MPMR models closely follow the GPR model in terms of performance, which is superior to the other models. Notably, two equations are generated using the GP and MARS models to predict PL. The results of the sensitivity analysis reveal the magnitude of the earthquake (Mw) as the most sensitive parameter. The outcomes of this research will offer risk evaluations for geotechnical engineering designs and expand the use of state-parameter-based SC models in liquefaction analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
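For a two-variable limit state g = CRR − CSR (cyclic resistance minus cyclic demand) with independent normal variables, the FOSM step named in the entry above reduces to a one-line reliability index. The CRR/CSR moments below are illustrative, not values from the CPT database.

```python
import numpy as np
from scipy.stats import norm

def fosm_pl(mu_crr, sd_crr, mu_csr, sd_csr):
    """First-order second-moment probability of liquefaction for the
    limit state g = CRR - CSR, assuming independent normal variables."""
    mu_g = mu_crr - mu_csr
    sd_g = np.hypot(sd_crr, sd_csr)   # sqrt(sd_crr^2 + sd_csr^2)
    beta = mu_g / sd_g                # reliability index
    return norm.cdf(-beta)            # PL = Phi(-beta)

pl = fosm_pl(mu_crr=0.30, sd_crr=0.06, mu_csr=0.22, sd_csr=0.04)
```

When capacity and demand coincide on average (beta = 0), the method returns PL = 0.5, which is a useful sanity check.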
28. Stochastic debugging based reliability growth models for Open Source Software project.
- Author
-
Singhal, Shakshi, Kapur, P. K., Kumar, Vivek, and Panwar, Saurabh
- Subjects
- *
OPEN source software , *SOFT computing , *DISTRIBUTION (Probability theory) , *SOFTWARE reliability , *COMPUTER software development - Abstract
Open Source Software (OSS) is one of the most trusted technologies for implementing industry 4.0 solutions. The study aims to assist a community of OSS developers in quantifying the product's reliability. This research proposes reliability growth models for OSS by incorporating dynamicity in the debugging process. For this, stochastic differential equation-based analytical models are developed to represent the instantaneous rate of error generation. The fault introduction rate is modeled using exponential and Erlang distribution functions. The empirical applications of the proposed methodology are verified using the real-life failure data of the Open Source Software projects, GNU Network Object Model Environment, and Eclipse. A soft computing technique, Genetic Algorithm, is applied to estimate model parameters. Cross-validation is also performed to examine the forecasting efficacy of the model. The predictive power of the developed models is compared with various benchmark studies. The data analysis is conducted using the R statistical computing software. The results demonstrate the proposed models' efficacy in parameter estimation and predictive performance. In addition, the optimal release time policy based on the proposed mathematical models is presented by formulating the optimization model that intends to minimize the total cost of software development under reliability constraints. The numerical illustration and sensitivity analysis exhibit the proposed problem's practical significance. The findings of the numerical analysis exemplify the proposed study's capability of decision-making under uncertainty. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
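The GA parameter-estimation step in the entry above can be sketched against the classical exponential SRGM m(t) = a(1 − e^(−bt)), a simpler mean-value function than the paper's SDE-based models. The GA below is a minimal elitist, mutation-only real-coded variant fitted to synthetic cumulative failure counts.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1, 31, dtype=float)
# Synthetic cumulative faults from a = 120, b = 0.15 plus observation noise.
m_obs = 120 * (1 - np.exp(-0.15 * t)) + rng.normal(0, 1.0, t.size)

def sse(params):
    """Sum of squared errors of the exponential SRGM for (a, b)."""
    a, b = params
    return np.sum((m_obs - a * (1 - np.exp(-b * t))) ** 2)

# Elitist GA: keep the 10 best, refill with Gaussian-mutated copies.
pop = np.column_stack([rng.uniform(50, 300, 60), rng.uniform(0.01, 0.5, 60)])
for _ in range(100):
    fit = np.array([sse(p) for p in pop])
    elite = pop[np.argsort(fit)[:10]]
    children = elite[rng.integers(0, 10, 50)] + rng.normal(0, [2.0, 0.01], (50, 2))
    pop = np.vstack([elite, np.abs(children)])   # keep parameters positive
a_hat, b_hat = pop[np.argmin([sse(p) for p in pop])]
```

Real implementations would add crossover and a convergence criterion; this sketch only shows the estimation loop.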
29. Rainfall-runoff modelling – a comparison of Artificial Neural Networks (ANNs) and Hydrologic Engineering Centre-Hydrologic Modelling System (HEC-HMS).
- Author
-
Deulkar, Aparna M., Londhe, Shreenivas N., Jain, Rakesh K., and Dixit, Pradnya R.
- Subjects
ARTIFICIAL neural networks ,STANDARD deviations ,ENGINEERING models ,SOFT computing ,HYDROLOGIC models - Abstract
This article presents a comparison of Artificial Neural Networks (ANNs) and the Hydrologic Engineering Centre-Hydrologic Modelling System (HEC-HMS) model for the rainfall-runoff (R-R) process. The aim of the present work is to forecast runoff one day ahead at the Shivade station of the Upper Krishna Basin, India, using 17 years of daily rainfall and discharge data. R-R modelling can be exercised using various traditional methods, which generally require exogenous data in the form of basin parameters. Unavailability of such data becomes a major impediment to applying these models at many basins. In such situations, soft computing techniques like ANNs have been extensively applied to model the R-R process. Though ANN is now an established tool in hydrology, its results are often viewed with suspicion compared to HEC-HMS, owing to its data-driven rather than model-driven nature. In this study, the ANN model performed reasonably well, with a higher correlation coefficient (0.87) and a lower Root Mean Square Error (136.28 m³/s) compared with HEC-HMS (0.76, 139.8 m³/s). The novelty of the present work lies in model development using restricted basin data. Both models showed less accuracy in predicting extreme events. Finally, it is concluded that the ANN model can be used as a supplementary technique along with HEC-HMS for this phenomenon. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
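The one-day-ahead setup in the entry above implies a simple dataset layout: lagged rainfall and discharge as inputs, next-day discharge as target, scored by the two reported metrics (correlation coefficient and RMSE). The lag count and the synthetic linear catchment below are assumptions, not the Shivade data.

```python
import numpy as np

def rmse(obs, sim):
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def corr(obs, sim):
    return float(np.corrcoef(obs, sim)[0, 1])

def one_day_ahead(rain, q, lags=2):
    """Pair each day's discharge with the previous `lags` days of rainfall
    and discharge, the usual data-driven R-R input layout (assumed here)."""
    X = np.array([np.concatenate([rain[t - lags:t], q[t - lags:t]])
                  for t in range(lags, len(q))])
    return X, q[lags:]

rng = np.random.default_rng(5)
rain = rng.gamma(2.0, 3.0, 400)                      # synthetic daily rainfall
q = np.convolve(rain, [0.5, 0.3, 0.2])[:400] + 5.0   # crude linear catchment
X, y = one_day_ahead(rain, q)
r = corr(y, X[:, -1])    # persistence baseline: yesterday's discharge
```

An ANN trained on `(X, y)` would be compared against this persistence baseline with the same `corr` and `rmse` metrics.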
30. Applications of Soft Computing Methods in Backbreak Assessment in Surface Mines: A Comprehensive Review.
- Author
-
Yari, Mojtaba, Khandelwal, Manoj, Abbasi, Payam, Koutras, Evangelos I., Armaghani, Danial Jahed, and Asteris, Panagiotis G.
- Subjects
MINING engineering ,SOFT sets ,CIVIL engineering ,COMPUTERS ,SOFT computing - Abstract
Geo-engineering problems are known for their complexity and high uncertainty levels, requiring precise definitions, past experiences, logical reasoning, mathematical analysis, and practical insight to address them effectively. Soft Computing (SC) methods have gained popularity in engineering disciplines such as mining and civil engineering due to computer hardware and machine learning advancements. Unlike traditional hard computing approaches, SC models use soft values and fuzzy sets to navigate uncertain environments. This study focuses on the application of SC methods to predict backbreak, a common issue in blasting operations within mining and civil projects. Backbreak, which refers to the unintended fracturing of rock beyond the desired blast perimeter, can significantly impact project timelines and costs. This study aims to explore how SC methods can be effectively employed to anticipate and mitigate the undesirable consequences of blasting operations, specifically focusing on backbreak prediction. The research explores the complexities of backbreak prediction and highlights the potential benefits of utilizing SC methods to address this challenging issue in geo-engineering projects. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Optimizing Injection Molding for Propellers with Soft Computing, Fuzzy Evaluation, and Taguchi Method
- Author
-
M. Hedayati-Dezfooli, Mehdi Moayyedian, Ali Dinc, Mostafa Abdrabboh, Ahmed Saber, and A. M. Amer
- Subjects
injection molding ,shrinkage ,sink mark ,soft computing ,fahp ,topsis ,taguchi. ,Technology (General) ,T1-995 ,Social sciences (General) ,H1-99 - Abstract
This research explores multi-objective optimization in injection molding with a focus on identifying the optimal configuration for the moldability index in aviation propeller manufacturing. The study employs the Taguchi method and fuzzy analytic hierarchy process (FAHP) combined with the Technique for the Order Performance by Similarity to the Ideal Solution (TOPSIS) to systematically evaluate diverse objectives. The investigation specifically addresses two prevalent defects—shrinkage rate and sink mark—that impact the final quality of injection-molded components. Polypropylene is chosen as the injection material, and critical process parameters encompass melt temperature, mold temperature, filling time, cooling time, and pressure holding time. The Taguchi L25 orthogonal array is selected, considering the number of levels and parameters, and Finite Element Analysis (FEA) is applied to enhance precision in results. To validate both simulation outcomes and the proposed optimization methodology, Artificial Neural Network (ANN) analysis is conducted for the chosen component. The Fuzzy-TOPSIS method, in conjunction with ANN, is employed to ascertain the optimal levels of the selected parameters. The margin of error between the chosen optimization methods is found to be less than one percent, underscoring their suitability for injection molding optimization. The efficacy of the selected optimization method has been corroborated in prior research. Ultimately, employing the fuzzy-TOPSIS optimization method yields a minimum shrinkage value of 16.34% and a sink mark value of 0.0516 mm. Similarly, utilizing the ANN optimization method results in minimum values of 16.42% for shrinkage and 0.0519 mm for the sink mark. Doi: 10.28991/ESJ-2024-08-05-025 Full Text: PDF
- Published
- 2024
- Full Text
- View/download PDF
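The TOPSIS stage in the entry above ranks trial settings by closeness to the ideal solution. The sketch below uses crisp weights standing in for the FAHP output, with hypothetical shrinkage and sink-mark values; only the first row echoes magnitudes reported in the abstract.

```python
import numpy as np

def topsis(M, weights, benefit):
    """Rank alternatives (rows of M) by relative closeness to the ideal
    solution; benefit[j] is True when criterion j is to be maximized."""
    N = M / np.linalg.norm(M, axis=0)            # vector normalization
    V = N * weights                              # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to ideal
    d_neg = np.linalg.norm(V - worst, axis=1)    # distance to anti-ideal
    return d_neg / (d_pos + d_neg)               # higher = better

# Hypothetical trial summaries: columns = shrinkage (%), sink mark (mm),
# both cost criteria (lower is better).
M = np.array([[16.34, 0.0516],
              [17.10, 0.0560],
              [16.80, 0.0540]])
score = topsis(M, weights=np.array([0.6, 0.4]),
               benefit=np.array([False, False]))
best = int(np.argmax(score))
```

In the paper's workflow the weights would come from the FAHP pairwise comparisons rather than being fixed by hand.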
32. Multiple objectives optimization of injection-moulding process for dashboard using soft computing and particle swarm optimization
- Author
-
Mehdi Moayyedian, Mohammad Reza Chalak Qazani, Parisa Jourabchi Amirkhizi, Houshyar Asadi, and Mohsen Hedayati-Dezfooli
- Subjects
Injection moulding ,Warpage/shrinkage/sink mark ,Soft computing ,Multiple objectives particle swarm optimisation ,Pareto front ,Medicine ,Science - Abstract
Abstract This research focuses on utilizing injection moulding to assess defects in plastic products, including sink marks, shrinkage, and warpages. Process parameters, such as pure cooling time, mould temperature, melt temperature, and pressure holding time, are carefully selected for investigation. A full factorial design of experiments is employed to identify optimal settings. These parameters significantly affect the physical and mechanical properties of the final product. Soft computing methods, supported by finite element (FE) simulation, help model this behaviour by considering different input parameters. A CAD model of a dashboard component is integrated into an FE simulation to quantify shrinkage, warpage, and sink marks. Four chosen parameters of the injection moulding machine undergo a comprehensive experimental design. Decision tree, multilayer perceptron, long short-term memory, and gated recurrent unit models are explored for modelling the injection moulding process. The best-performing model is used to estimate the defects. Multiple-objective particle swarm optimisation then extracts the optimal process parameters. The proposed method is implemented in MATLAB, providing 18 optimal solutions based on the extracted Pareto front.
- Published
- 2024
- Full Text
- View/download PDF
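The Pareto front that the multiple-objective PSO in the entry above reports is simply the set of non-dominated parameter settings. A minimal non-dominated filter over hypothetical (warpage, sink mark) outcomes, both minimized:

```python
import numpy as np

def pareto_front(costs):
    """Indices of non-dominated rows when all objectives are minimized:
    row i is dominated if some row is <= in every column and < in one."""
    keep = []
    for i in range(costs.shape[0]):
        dominated = np.any(np.all(costs <= costs[i], axis=1) &
                           np.any(costs < costs[i], axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical (warpage mm, sink mark mm) for five parameter settings:
costs = np.array([[0.20, 0.050],
                  [0.15, 0.060],    # trades warpage against sink mark
                  [0.25, 0.045],    # trades sink mark against warpage
                  [0.30, 0.070],    # dominated by the first setting
                  [0.20, 0.055]])   # dominated by the first setting
front = pareto_front(costs)
```

The PSO's archive of such non-dominated points is what the abstract reports as its 18 optimal solutions.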
33. Intelligent agricultural robotic detection system for greenhouse tomato leaf diseases using soft computing techniques and deep learning
- Author
-
Thi Thoa Mac, Tien-Duc Nguyen, Hong-Ky Dang, Duc-Toan Nguyen, and Xuan-Thuan Nguyen
- Subjects
Soft computing ,Fuzzy control ,Tomato plant disease classification ,DCGAN ,Precision agriculture ,Medicine ,Science - Abstract
Abstract The development of soft computing methods has had a significant influence on the field of autonomous intelligent agriculture. This paper offers a system for autonomous greenhouse navigation that employs a fuzzy control algorithm and a deep learning-based disease classification model for tomato plants, identifying illnesses from photos of tomato leaves. The primary novelty in this study is the introduction of an upgraded Deep Convolutional Generative Adversarial Network (DCGAN) that creates augmented pictures of diseased tomato leaves from original genuine samples, considerably enlarging the training dataset. To find the optimum training model, four deep learning networks (VGG19, Inception-v3, DenseNet-201, and ResNet-152) were carefully compared on a dataset of nine tomato leaf disease classes. These models achieve validation accuracies of 92.32%, 90.83%, 96.61%, and 97.07%, respectively, when using the original PlantVillage dataset. The system then uses the enhanced dataset with the ResNet-152 network to achieve a high accuracy of 99.69%, compared with ResNet-152's accuracy of 97.07% on the original dataset. This improvement demonstrates the value of the proposed DCGAN in improving the performance of the deep learning model for greenhouse plant monitoring and disease detection. Furthermore, the proposed approach may have broader use in various agricultural scenarios, potentially altering the field of autonomous intelligent agriculture.
- Published
- 2024
- Full Text
- View/download PDF
34. Passive earth pressure on vertical rigid walls with negative wall friction coupling statically admissible stress field and soft computing
- Author
-
Jim Shiau, Tan Nguyen, and Tram Bui-Ngoc
- Subjects
Negative wall friction ,Admissible stress field ,Passive earth pressure ,Soft computing ,Medicine ,Science - Abstract
Abstract It is well known that the roughness of a wall plays a crucial role in determining the passive earth pressure exerted on a rigid wall. While the effects of positive wall roughness have been extensively studied in the past few decades, the study of passive earth pressure with negative wall friction is rarely found in the literature. This study aims to provide a precise solution for negative friction walls under passive wall conditions. The research is initiated by adopting a radial stress field for the cohesionless backfill and employs the concept of stress self-similarity. The problem is then formulated so that a statically admissible stress field can be developed throughout the analyzed domain using a two-step numerical framework. The framework involves the successful execution of numerical integration, which leads to the exploration of the statically admissible stress field in cohesionless backfills under negative wall friction. This, in turn, helps to shed light on the mechanism of load transfer in such situations, so that reliable design charts and tables can be provided for practical use. The study continues with a soft computing model that leads to more robust and effective designs for earth-retaining structures under various negative wall frictions and sloping backfills.
- Published
- 2024
- Full Text
- View/download PDF
35. Artificial neural network-based fault classification in nine-level inverters.
- Author
-
Kavitha, N., Roseline, J. F., and Yong, L. C.
- Subjects
- *
PRINCIPAL components analysis , *SOFT computing , *CLASSIFICATION - Abstract
This paper focuses on fault classification in a cascaded H-bridge nine-level inverter (CHBMLI). Comparative fault analysis and fault classification are presented. Earlier methods of fault classification, such as wavelet and principal component analysis, were used to classify faults in multi-level inverters. Wavelets were widely used for fault classification as they give time and frequency information simultaneously. With the advent of neural networks and other soft computing techniques, multilevel inverter fault analysis has become much simpler. Healthy outputs are compared with faulty signals, and these are fed to a neural network, which is trained on the signals. As a result, the performances of faulty and healthy signals are studied, and results are presented using MATLAB/SIMULINK. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Deep learning based soft computing technique for intelligent attendance management system.
- Author
-
Sriram, K. K., Sivakumar, V., Sathieshkumar, P., Maheswari, P. Uma, and Roomi, S. Mohamed Mansoor
- Subjects
- *
SOFT computing , *DATA augmentation , *ELECTRONIC paper , *FEATURE extraction , *DEEP learning , *ATTENDANCE - Abstract
The use of facial recognition methods has spread to many different industries, most notably smart attendance systems. One major obstacle these systems must overcome is spoofing: the practice of people trying to trick the system by using counterfeit or modified photos or videos. In the context of a smart attendance framework, this paper presents a smart anti-spoofing mechanism that can accurately distinguish between real and fake faces. The developed system involves generating datasets that contain both real and fake facial photos and videos, together with a deep learning-driven feature extraction and classification pipeline. Furthermore, a number of data augmentation techniques, including translation, scaling, and rotation, are used to improve the effectiveness of the proposed system against different spoofing strategies. Empirical results show that these strategies support increased accuracy and robustness in thwarting various types of spoofing attacks. With a 97% overall recognition rate, our anti-spoofing technology affords a stable and secure foundation for intelligent attendance systems. The study emphasizes the effectiveness of building a strong anti-spoofing system specifically for smart attendance applications by integrating deep learning and soft computing models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
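The translation, scaling, and rotation augmentations named in the entry above can be sketched with scipy.ndimage; the parameter ranges and the toy image are illustrative, not the paper's face data.

```python
import numpy as np
from scipy import ndimage

def augment(img, angle=0.0, shift=(0, 0), scale=1.0):
    """Apply rotation, translation and scaling to a 2-D grayscale image,
    returning an array of the original size (illustrative parameters)."""
    out = ndimage.rotate(img, angle, reshape=False, order=1, mode="nearest")
    out = ndimage.shift(out, shift, order=1, mode="nearest")
    out = ndimage.zoom(out, scale, order=1)
    h, w = img.shape                     # crop or pad back to original size
    out = out[:h, :w]
    pad = [(0, h - out.shape[0]), (0, w - out.shape[1])]
    return np.pad(out, pad, mode="edge")

face = np.zeros((64, 64))
face[20:44, 20:44] = 1.0                 # toy "face" patch
batch = [augment(face, angle=a, shift=(2, -2), scale=1.1)
         for a in (-10, 0, 10)]          # three augmented variants
```

Keeping the output size fixed lets augmented samples be stacked directly into training batches.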
37. Federated fusion technique for data driven decision making in intelligent transportation systems –[ITS].
- Author
-
Sheefa, Y., Amutha, B., and Muthamilselvan, Dhivyadarsan Gomathi
- Subjects
- *
VIDEO summarization , *INTELLIGENT transportation systems , *MULTISENSOR data fusion , *DECISION making , *SOFT computing , *VIDEO surveillance - Abstract
Intelligent Transportation Systems (ITS) have emerged as a key solution for addressing the challenges associated with modern transportation systems. The primary goal of ITS is to improve the efficiency, safety, and sustainability of transportation through the use of advanced technologies and data analytics. The boom in video production has given rise to video summarization, content extraction, autonomous video surveillance systems, and related applications, and these approaches are often used to track moving objects in videos. YOLO and FL algorithms have been used to create low-cost, robust real-world solutions. These analyses are organised into detection and tracking categories across all modalities, the pros and cons of each method are listed, and a tabular analysis from multiple perspectives is provided. Guidance on applying soft computing is supplied, along with datasets for soft computing testing and evaluation. YOLO and FL are then used to develop a new object recognition and tracking system. The revised CDM-YOLO technique finds the best threshold value for separating foreground and background pixels, and the approach models background and contours well. Tests of the suggested technique on our representative dataset yield qualitative and quantitative results, and comparison with current practices strengthens its case: on standard videos it achieves better precision, recall, and accuracy than state-of-the-art approaches. IFTTT is used to send message alerts to the account holder. The results suggest that the recommended technique is better at solving many complex situations, with key parameter values far better than those of other methods. The adaptive algorithm generates new values for the governing parameters, such as the learning rate and threshold, which improves classification accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
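The foreground/background threshold selection attributed to CDM-YOLO in the entry above is not specified in detail; as a classical point of reference, Otsu's between-class-variance criterion picks such a threshold automatically. This is an assumption-level stand-in on a synthetic bimodal frame, not the paper's method.

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the gray-level threshold that maximizes between-class variance
    (Otsu's method) for values in [0, 255]."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability up to each bin
    mu = np.cumsum(p * np.arange(256))      # cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))       # ignore undefined end bins

rng = np.random.default_rng(2)
bg = rng.normal(60, 10, 5000)               # dark background pixels
fg = rng.normal(180, 12, 2000)              # bright foreground (moving object)
frame = np.clip(np.concatenate([bg, fg]), 0, 255)
t = otsu_threshold(frame)                   # lands between the two modes
```

An adaptive scheme like the one described in the abstract would re-estimate this threshold per frame as lighting and content change.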
38. Application of soft computing techniques and software defined networks for detection of fraudulent resource consumption attacks: A comprehensive review.
- Author
-
Shinde, Amar and Bhingarkar, Sukhada
- Subjects
- *
SOFTWARE-defined networking , *SOFT computing , *CLOUD computing , *ENERGY consumption , *MACHINE learning - Abstract
With most businesses increasing their dependence on the internet, it has become evident that several resources must be either constructed or hired to sustain their growing needs. Here arises the main requirement for hired services such as cloud computing, which many businesses can use for continuous growth and sustenance. With the growing number of inventions and discoveries in the field of cyber-security, a large number of attacks on cyber assets have also become possible. Cloud computing architecture is no exception and is vulnerable to different breeds of attacks that threaten the utility of the cloud. Cloud computing architecture and the services it provides mainly use the pay-as-you-use model, with many services auto-scaled to a large extent, thereby maintaining the flexibility required for any growing business with a large customer base. There are various fraudulent threats related to energy utilization that attempt to exploit the elasticity of the cloud and the multi-tenant model it provides. In addition, with the rapid development of technologies, networking systems are becoming complex, which requires a detailed study of the various detection and mitigation techniques available against the different breeds of attacks on the cloud paradigm. In this paper, an attempt is made to compile in detail the various detection and mitigation techniques against a variant of DDoS called the fraudulent resource consumption (FRC) attack. It also reviews the different machine learning techniques used to train models for detecting these attacks, as well as studies on using SDN and its built-in features for detection. The paper concludes by proposing a set of recommendations for future studies concerning the identification and prevention of FRC attacks, leveraging SDN and machine learning. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
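The review above surveys detection techniques for fraudulent resource consumption, which typically hinge on spotting usage that deviates from a learned baseline. A minimal sketch of that idea, assuming hypothetical hourly request counts and an EWMA baseline (this specific detector is illustrative, not one proposed in the reviewed papers):

```python
def ewma_anomalies(usage, alpha=0.3, k=3.0):
    """Flag samples that deviate from an exponentially weighted moving
    average baseline by more than k EWMA-standard-deviations."""
    mean, var, flags = float(usage[0]), 0.0, []
    for i, u in enumerate(usage[1:], start=1):
        diff = u - mean
        if var > 0 and abs(diff) > k * var ** 0.5:
            flags.append(i)        # anomalous sample: don't let it poison the baseline
            continue
        incr = alpha * diff
        mean += incr
        var = (1 - alpha) * (var + diff * incr)
    return flags

# hypothetical hourly request counts: steady traffic, one consumption spike
hourly_requests = [100, 102, 98, 101, 99, 100, 180, 101, 100]
suspect = ewma_anomalies(hourly_requests)   # flags the spike at index 6
```

A real FRC detector would work on billed resource metrics and combine several signals; the skip-on-flag update is the key trick that keeps the attacker from dragging the baseline upward.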
39. Optimization of fused deposition modelling printing parameters using hybrid GA-fuzzy evolutionary algorithm.
- Author
-
Deswal, Sandeep, Kaushik, Ashish, Garg, Ramesh Kumar, Sahdev, Ravinder Kumar, and Chhabra, Deepak
- Subjects
- *
FUSED deposition modeling , *FUZZY algorithms , *COMPRESSIVE strength , *SOFT computing , *FUZZY logic - Abstract
The present study investigates the compressive strength performance of polylactic acid (PLA) polymer material parts printed using the Fused Deposition Modelling (FDM) three-dimensional (3D) printing process, with a particular emphasis on various machine input parameters. The face-centred central composite design matrix approach was employed for experimental modelling, which was subsequently utilised as a knowledge base for the fuzzy algorithm. A hybrid evolutionary algorithm, i.e., a Genetic Algorithm (GA) assisted with the Fuzzy Logic Methodology (FLM), was used to optimize the input process parameters and compressive strength of polymer material parts fabricated with the FDM technique. The study concluded that the maximum compressive strength observed with GA-integrated FLM was 49.7303 MPa at the input factors layer thickness 0.16 mm, temperature 208°C, infill pattern Honeycomb, infill density 60%, and speed/extrusion velocity 41 mm/s, which is higher than the experimental (47.08 MPa) and fuzzy-predicted (47.101 MPa) values. This evolutionary hybrid soft computing methodology has optimized the compressive strength of PLA polymer material parts at the optimum parameter combination set. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
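The hybrid GA-fuzzy optimization above searches the printing-parameter space for maximum compressive strength. A minimal GA sketch of that search, assuming a made-up two-parameter surrogate in place of the paper's fuzzy knowledge base (the bounds, population sizes, and strength function below are all hypothetical):

```python
import random

random.seed(42)

# Hypothetical surrogate for compressive strength (MPa) as a function of
# layer thickness (mm) and nozzle temperature (degC); it stands in for the
# fuzzy model built from the experimental design matrix in the paper.
def strength(thickness, temp):
    return 50 - 80 * (thickness - 0.16) ** 2 - 0.01 * (temp - 208) ** 2

BOUNDS = [(0.08, 0.32), (190.0, 230.0)]   # assumed search ranges

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(ind, rate=0.3):
    # Gaussian perturbation, clamped back into the parameter bounds
    return [min(hi, max(lo, g + random.gauss(0, 0.05 * (hi - lo))))
            if random.random() < rate else g
            for g, (lo, hi) in zip(ind, BOUNDS)]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]   # uniform crossover

pop = [random_individual() for _ in range(30)]
for _ in range(60):                        # generations
    pop.sort(key=lambda ind: strength(*ind), reverse=True)
    elite = pop[:10]                       # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]     # offspring

best = max(pop, key=lambda ind: strength(*ind))
```

With the surrogate's optimum at (0.16 mm, 208 °C), the elitist GA homes in on that region; in the paper, the fuzzy inference system plays the role of this fitness function.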
40. Soft computing approaches for dynamic multi-objective evaluation of computational offloading: a literature review.
- Author
-
Khan, Sheharyar, Jiangbin, Zheng, and Ali, Hassan
- Subjects
- *
LITERATURE reviews , *SOFT computing , *MOBILE computing , *EDGE computing , *FUZZY logic - Abstract
Optimizing computational offloading in Mobile Edge Computing (MEC) environments presents a multifaceted challenge requiring innovative solutions. Soft computing, recognized for its ability to manage uncertainty and complexity, emerges as a promising approach for addressing the dynamic multi-objective evaluation inherent in computational offloading scenarios. This paper conducts a comprehensive review and analysis of soft computing approaches for Dynamic Multi-Objective Evaluation of Computational Offloading (DMOECO), aiming to identify trends, analyze existing literature, and offer insights for future research directions. Employing a systematic literature review (SLR) methodology, we meticulously scrutinize 50 research articles and scholarly publications spanning from 2016 to November 2023. Our review synthesizes advancements in soft computing techniques, including fuzzy logic, neural networks, evolutionary algorithms, and probabilistic reasoning, as applied to computational offloading optimization within MEC environments. Within this comprehensive review, existing approaches are categorized and analyzed into distinct research lines based on methodologies, objectives, evaluation metrics, and application domains. The evolution of soft computing-based DMOECO strategies is emphasized, showcasing their effectiveness in dynamically balancing various computational objectives, including energy consumption, latency, throughput, user experience, and other pertinent factors in computational offloading scenarios. Key challenges, including scalability issues, lack of real-world deployment validation, and the need for standardized evaluation benchmarks, are identified. Insights and recommendations are provided to enhance computational offloading optimization. Furthermore, collaborative efforts between academia and industry are advocated to bridge the theoretical developments with practical implementations. 
This study pioneers the use of SLR methodology, offering valuable perspectives on soft computing in DMOECO and synthesizing state-of-the-art approaches. It serves as a crucial resource for researchers, practitioners, and stakeholders in the MEC domain, illuminating trends and fostering continued innovation in computational offloading strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. A systematic review of applications of machine learning and other soft computing techniques for the diagnosis of tropical diseases
- Author
-
Attai, Kingsley, Amannejad, Yasaman, Pour, Maryam Vahdat, Obot, Okure, and Uzoka, Faith-Michael
- Published
- 2022
42. Drought prediction using artificial intelligence models based on climate data and soil moisture
- Author
-
Mhamd Saifaldeen Oyounalsoud, Abdullah Gokhan Yilmaz, Mohamed Abdallah, and Abdulrahman Abdeljaber
- Subjects
Meteorological drought ,Drought indices ,Forecasting ,Soft computing ,Drought indicators ,Medicine ,Science - Abstract
Abstract Drought is deemed a major natural disaster that can lead to severe economic and social implications. Drought indices are utilized worldwide for drought management and monitoring. However, as a result of the inherent complexity of drought phenomena and hydroclimatic condition differences, no universal drought index is available for effectively monitoring drought across the world. Therefore, this study aimed to develop a new meteorological drought index to describe and forecast drought based on various artificial intelligence (AI) models: decision tree (DT), generalized linear model (GLM), support vector machine, artificial neural network, deep learning, and random forest. A comparative assessment was conducted between the developed AI-based indices and nine conventional drought indices based on their correlations with multiple drought indicators. Historical records of five drought indicators, namely runoff, along with deep, lower, root, and upper soil moisture, were utilized to evaluate the models’ performance. Different combinations of climatic datasets from Alice Springs, Australia, were utilized to develop and train the AI models. The results demonstrated that the rainfall anomaly drought index was the best conventional drought index, scoring the highest correlation (0.718) with the upper soil moisture. The highest correlation between the new and conventional indices was found between the DT-based index and the rainfall anomaly index at a value of 0.97, whereas the lowest correlation was 0.57 between the GLM and the Palmer drought severity index. The GLM-based index achieved the best performance according to its high correlations with conventional drought indicators, e.g., a correlation coefficient of 0.78 with the upper soil moisture. Overall, the developed AI-based drought indices outperformed the conventional indices, hence contributing effectively to more accurate drought forecasting and monitoring. 
The findings emphasized that AI can be a promising and reliable prediction approach for achieving better drought assessment and mitigation.
- Published
- 2024
- Full Text
- View/download PDF
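The study above ranks drought indices by their correlation with indicators such as upper soil moisture. That evaluation step reduces to a Pearson correlation between two series, sketched here with invented toy values (the real study uses climate records from Alice Springs):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# toy monthly series (hypothetical): model-based drought index vs. upper soil moisture
index    = [0.1, -0.4, -1.2, -0.8, 0.3, 0.9]
upper_sm = [0.22, 0.18, 0.11, 0.13, 0.24, 0.30]
r = pearson_r(index, upper_sm)   # strongly positive on these toy values
```

An index that tracks the drying and recovery of the soil column will score close to 1; the study's comparison of 0.718 (rainfall anomaly index) versus 0.78 (GLM-based index) is exactly this computation over historical records.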
43. An Immense Approach of High Order Fuzzy Time Series Forecasting of Household Consumption Expenditures with High Precision
- Author
-
Burney Syed Muhammad Aqil, Khan Muhammad Shahbaz, Alim Affan, and Efendi Riswan
- Subjects
aper ,fuzzy numbers ,fuzzy relationship ,fuzzy sets ,fuzzy time series ,second-order fuzzy sets ,soft computing ,Computer software ,QA76.75-76.765 - Abstract
Fuzzy Time Series (FTS) models are experiencing an increase in popularity due to their effectiveness in forecasting and modelling diverse and intricate time series data sets. Essentially, these models use membership functions and fuzzy logic relation functions to produce predicted outputs through a defuzzification process. In this study, we suggest a Second-Order Type-1 FTS (S-O T-1 F-T-S) forecasting model for the analysis of time series data sets. The suggested method was compared to the state-of-the-art First-Order Type-1 FTS method and demonstrated superior performance when applied to household consumption data from the Magene Regency in Indonesia, as measured by the absolute percentage error rate (APER).
- Published
- 2024
- Full Text
- View/download PDF
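The comparison above is scored with the absolute percentage error rate (APER). Assuming APER is the mean absolute percentage error over the series, it can be computed as below; the two forecast series are hypothetical stand-ins for first- and second-order FTS outputs:

```python
def aper(actual, forecast):
    """Absolute percentage error rate, in percent (mean |error|/|actual|)."""
    return 100.0 * sum(abs(a - f) / abs(a)
                       for a, f in zip(actual, forecast)) / len(actual)

actual   = [120.0, 135.0, 150.0, 160.0]   # observed household expenditures (toy)
first_o  = [110.0, 128.0, 141.0, 170.0]   # hypothetical first-order FTS forecasts
second_o = [118.0, 133.0, 148.0, 163.0]   # hypothetical second-order forecasts

# the tighter second-order forecasts yield the lower APER
assert aper(actual, second_o) < aper(actual, first_o)
```

A second-order model conditions each forecast on the two preceding fuzzified observations instead of one, which is why it can track turning points more closely and drive APER down.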
44. An optimal B-spline approach for vectorizing raster image outlines.
- Author
-
Abbas, Samreen, Hussain, Malik Zawwar, and ul-Ain, Qurat
- Subjects
- *
SOFT computing , *CURVE fitting , *MATHEMATICAL optimization , *GENETIC algorithms , *GENETIC techniques - Abstract
Capturing outlines is crucial in the vectorization of digital images. An effective and automatic algorithm is presented for vectorizing the outlines of planar objects within digital images using trigonometric B-splines. A soft computing optimization technique, the Genetic Algorithm (GA), is employed to determine suitable parameter values in the description of the proposed B-spline. The proposed scheme is executed on a few raster (bitmap) digital images to validate the robustness of the algorithm. The procedure of vectorizing outlines encompasses a series of stages: boundary detection, corner recognition, break point identification, and curve fitting using the proposed B-spline. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
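The vectorization pipeline above includes a corner-recognition stage before curve fitting. One common heuristic (a sketch, not necessarily the authors' method) flags outline points whose turning angle is sharp:

```python
import math

def corners(points, angle_thresh_deg=150.0):
    """Return indices of outline points where the angle between the two
    adjacent edge vectors is sharper than the threshold."""
    idx = []
    for i in range(1, len(points) - 1):
        ax, ay = points[i-1][0] - points[i][0], points[i-1][1] - points[i][1]
        bx, by = points[i+1][0] - points[i][0], points[i+1][1] - points[i][1]
        dot = ax * bx + ay * by
        na, nb = math.hypot(ax, ay), math.hypot(bx, by)
        # clamp guards against floating-point drift outside [-1, 1]
        ang = math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))
        if ang < angle_thresh_deg:     # sharp turn -> corner / break point
            idx.append(i)
    return idx

# L-shaped toy outline: a straight run, one right-angle corner, another run
outline = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
cs = corners(outline)                  # only the right-angle point is flagged
```

Collinear points give a 180° angle and pass through untouched; the flagged indices become the break points between which the B-spline segments are fitted, with the GA tuning the spline's free parameters per segment.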
45. Drought prediction using artificial intelligence models based on climate data and soil moisture.
- Author
-
Oyounalsoud, Mhamd Saifaldeen, Yilmaz, Abdullah Gokhan, Abdallah, Mohamed, and Abdeljaber, Abdulrahman
- Subjects
- *
DROUGHT forecasting , *ARTIFICIAL neural networks , *ARTIFICIAL intelligence , *SOIL moisture , *ATMOSPHERIC models , *DROUGHT management , *WATERSHEDS - Abstract
Drought is deemed a major natural disaster that can lead to severe economic and social implications. Drought indices are utilized worldwide for drought management and monitoring. However, as a result of the inherent complexity of drought phenomena and hydroclimatic condition differences, no universal drought index is available for effectively monitoring drought across the world. Therefore, this study aimed to develop a new meteorological drought index to describe and forecast drought based on various artificial intelligence (AI) models: decision tree (DT), generalized linear model (GLM), support vector machine, artificial neural network, deep learning, and random forest. A comparative assessment was conducted between the developed AI-based indices and nine conventional drought indices based on their correlations with multiple drought indicators. Historical records of five drought indicators, namely runoff, along with deep, lower, root, and upper soil moisture, were utilized to evaluate the models' performance. Different combinations of climatic datasets from Alice Springs, Australia, were utilized to develop and train the AI models. The results demonstrated that the rainfall anomaly drought index was the best conventional drought index, scoring the highest correlation (0.718) with the upper soil moisture. The highest correlation between the new and conventional indices was found between the DT-based index and the rainfall anomaly index at a value of 0.97, whereas the lowest correlation was 0.57 between the GLM and the Palmer drought severity index. The GLM-based index achieved the best performance according to its high correlations with conventional drought indicators, e.g., a correlation coefficient of 0.78 with the upper soil moisture. Overall, the developed AI-based drought indices outperformed the conventional indices, hence contributing effectively to more accurate drought forecasting and monitoring. 
The findings emphasized that AI can be a promising and reliable prediction approach for achieving better drought assessment and mitigation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Prediction of saturation exponent for subsurface oil and gas reservoirs using soft computing methods.
- Author
-
Yadav, Anupam, Aldulaimi, Saeed Hameed, Altalbawy, Farag M. A., Raja, Praveen K. N., Ramudu, M. Janaki, Juraev, Nizomiddin, Khalaf, Hameed Hassan, Bassam, Bassam Farman, Mohammed, Nada Qasim, Kassid, Dunya Jameel, Elawady, Ahmed, Sina, Mohammad, Ya Yao, and Qiang Li
- Subjects
GAS reservoirs ,PETROLEUM reservoirs ,SOFT computing ,RADIAL basis functions ,PETROLEUM industry ,COALBED methane - Abstract
The most widely used equation to calculate water saturation in clean formations, or shaly water saturation in shaly formations, is the modified Archie formula. The quality of the Archie parameters, including the saturation exponent, affects the preciseness of the water saturation and thus of the estimated oil and gas in place. Therefore, estimating the saturation exponent by soft computing methods is deemed necessary. In this study, intelligent models such as the multilayer perceptron neural network, least squares support vector machine, radial basis function neural network, and adaptive neuro-fuzzy inference system are developed to predict the saturation exponent in terms of petrophysical data including porosity, absolute permeability, water saturation, true resistivity, and resistivity index, by utilizing a databank for Middle East oil and gas reservoirs. The introduced models are optimized using particle swarm optimization, genetic algorithm, and Levenberg-Marquardt techniques. Graphical and statistical methods are used to demonstrate the capability of the constructed models. Based on the statistical indexes obtained for each model, it is found that the radial basis function neural network, multilayer perceptron neural network, and least squares support vector machine are the most robust models, as they possess the smallest mean squared error, root mean squared error, and average absolute relative error as well as the highest coefficient of determination. Moreover, the sensitivity analysis indicates that water saturation has the most effect and porosity the least effect on the saturation exponent. The developed models are simple-to-use and time-saving tools to predict the saturation exponent without needing laboratory methods, which are tedious and arduous. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
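The saturation exponent being predicted above is the n in Archie's relation. For reference, given a single (resistivity index, water saturation) measurement pair the exponent follows directly from I = Sw^(-n), and the modified Archie formula then returns water saturation; the parameter values below are purely illustrative:

```python
import math

def saturation_exponent(resistivity_index, water_sat):
    """Invert Archie's relation I = Sw**(-n) for the exponent n."""
    return -math.log(resistivity_index) / math.log(water_sat)

def water_saturation(a, Rw, phi, m, Rt, n):
    """Archie formula: Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n)."""
    return ((a * Rw) / (phi ** m * Rt)) ** (1.0 / n)

# illustrative lab-style values: I = 4 at Sw = 0.5 implies n = 2
n = saturation_exponent(resistivity_index=4.0, water_sat=0.5)   # ~2.0
sw = water_saturation(a=1.0, Rw=0.05, phi=0.2, m=2.0, Rt=5.0, n=n)
```

In practice the exponent is fitted over many core measurements rather than one pair, which is why small errors in n propagate directly into the water saturation, and hence into oil- and gas-in-place estimates, as the abstract notes.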
47. Real‐time topology optimization via learnable mappings.
- Author
-
Garayalde, Gabriel, Torzoni, Matteo, Bruggi, Matteo, and Corigliano, Alberto
- Subjects
TOPOLOGY ,DEEP learning ,LATENT variables - Abstract
In traditional topology optimization, the computing time required to iteratively update the material distribution within a design domain strongly depends on the complexity or size of the problem, limiting its application in real engineering contexts. This work proposes a multi‐stage machine learning strategy that aims to predict an optimal topology and the related stress fields of interest, either in 2D or 3D, without resorting to any iterative analysis and design process. The overall topology optimization is treated as a regression task in a low‐dimensional latent space that encodes the variability of the target designs. First, a fully‐connected model is employed to surrogate the functional link between the parametric input space characterizing the design problem and the latent space representation of the corresponding optimal topology. The decoder branch of an autoencoder is then exploited to reconstruct the desired optimal topology from its latent representation. The deep learning models are trained on a dataset generated through a standard method of topology optimization implementing the solid isotropic material with penalization, for varying boundary and loading conditions. The underlying hypothesis behind the proposed strategy is that optimal topologies share enough common patterns to be compressed into small latent space representations without significant information loss. Results relevant to a 2D Messerschmitt‐Bölkow‐Blohm beam and a 3D bridge case demonstrate the capabilities of the proposed framework to provide accurate optimal topology predictions in a fraction of a second. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. Insight into the thermal transport by considering the modified Buongiorno model during the silicon oil-based hybrid nanofluid flow: probed by artificial intelligence.
- Author
-
Ullah, Asad, Yao, Hongxing, Ullah, Farid, Alqahtani, Haifa, Ismail, Emad A. A., Awwad, Fuad A., Shaaban, Abeer A., Shahzad, Hasan, and Zabihi, Ali
- Subjects
ARTIFICIAL neural networks ,DIMENSIONLESS numbers ,HEAT radiation & absorption ,MATHEMATICAL forms ,ARTIFICIAL intelligence - Abstract
This work aims to analyze the impacts of the magnetic field, activation energy, thermal radiation, thermophoresis, and Brownian effects on the hybrid nanofluid (HNF) (Ag + silicon oil) flow past a porous spinning disk. The pressure loss due to porosity is constituted by the Darcy-Forchheimer relation. The modified Buongiorno model is considered for simulating the flow field into a mathematical form. The modeled problem is further simplified with the new group of dimensionless variables and further transformed into a first-order system of equations. The reduced system is further analyzed with the Levenberg-Marquardt algorithm using a trained artificial neural network (ANN) with a tolerance and step size of 0.001, and 1,000 epochs. The state variables under the impacts of the pertinent parameters are assessed with graphs and tables. It has been observed that when the magnetic parameter increases, the velocity gradient of mono and hybrid nanofluids (NFs) decreases. As the input of the Darcy-Forchheimer parameter increases, the velocity profiles decrease. The result shows that as the thermophoresis parameter increases, temperature and concentration increase as well. When the activation energy parameter increases, the concentration profile becomes higher. For a deep insight into the analysis of the problem, a statistical approach for data fitting in the form of regression lines and error histograms for NF and HNF is presented. The regression lines show that 100% of the data is used in curve fitting, while the error histograms depict a minimal zero error of -7.1e-6 for the increasing values of Nt. Furthermore, the mean square error and performance validation for each varying parameter are presented. For validation, the present results are compared with the available literature in the form of a table, where the current results show great agreement with the existing one. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
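The Levenberg-Marquardt algorithm used above to train the ANN is a damped least-squares method: each step solves (JᵀJ + λI)δ = Jᵀr and adapts the damping λ. A self-contained sketch on a tiny two-parameter exponential model (the model and data are invented; the paper applies the same update to network weights):

```python
import math

def lm_fit(xs, ys, p, lam=1e-3, iters=200):
    """Levenberg-Marquardt for y ~ p[0]*exp(p[1]*x):
    solve (J^T J + lam*I) delta = J^T r, adapting the damping lam."""
    def residuals(q):
        return [y - q[0] * math.exp(q[1] * x) for x, y in zip(xs, ys)]
    r = residuals(p)
    err = sum(ri * ri for ri in r)
    for _ in range(iters):
        if err < 1e-15:
            break
        # Jacobian rows: [d(model)/dp0, d(model)/dp1]
        J = [[math.exp(p[1] * x), p[0] * x * math.exp(p[1] * x)] for x in xs]
        a00 = sum(j[0] * j[0] for j in J)
        a11 = sum(j[1] * j[1] for j in J)
        a01 = sum(j[0] * j[1] for j in J)
        g0 = sum(j[0] * ri for j, ri in zip(J, r))
        g1 = sum(j[1] * ri for j, ri in zip(J, r))
        while lam < 1e12:                   # retry with heavier damping on failure
            det = (a00 + lam) * (a11 + lam) - a01 * a01
            d0 = ((a11 + lam) * g0 - a01 * g1) / det
            d1 = ((a00 + lam) * g1 - a01 * g0) / det
            trial = [p[0] + d0, p[1] + d1]
            r_t = residuals(trial)
            e_t = sum(ri * ri for ri in r_t)
            if e_t < err:                   # accept the step, relax damping
                p, r, err = trial, r_t, e_t
                lam *= 0.5
                break
            lam *= 4.0
    return p

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.8 * x) for x in xs]   # noiseless synthetic observations
p = lm_fit(xs, ys, [1.0, 0.5])               # recovers roughly [2.0, 0.8]
```

Large λ makes the step behave like small-step gradient descent (robust far from the optimum); small λ recovers Gauss-Newton (fast near it), which is why LM is the default trainer for small dense networks of the kind used in the paper.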
49. Bearing capacity prediction of strip and ring footings embedded in layered sand.
- Author
-
Das, Pragyan Paramita and Khatri, Vishwas N.
- Subjects
- *
ARTIFICIAL neural networks , *RANDOM forest algorithms , *SOFT computing , *SAND - Abstract
A prediction model for the bearing capacity estimation of strip and ring footings embedded in layered sand is proposed using soft computing approaches, namely, an artificial neural network (ANN) and random forest regression (RFR). The data required for model preparation were generated by performing lower- and upper-bound finite-element limit analyses while varying the properties of the top and bottom layers. Two types of layered sand conditions are considered in the study: (a) dense on loose sand; (b) loose on dense sand. The investigation for strip footing was carried out by varying the thickness of the top layer, the embedment depth of the foundation, and the friction angles of the top and bottom layers. For a ring footing, the internal-to-external diameter ratio forms an additional variable. In total, 1222 and 4204 data sets were generated for strip and ring footings, respectively. The performance measures obtained during the training and testing phases suggest that the RFR model outperforms the ANN. Also, following the literature, an analytical model was developed to predict the bearing capacity of strip footing on layered sand. The ANN and the generated analytical model predictions agreed with the published experimental data in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Prediction of reinforced concrete walls shear strength based on soft computing-based techniques.
- Author
-
Tabrizikahou, Alireza, Pavić, Gordana, Shahsavani, Younes, and Hadzima-Nyarko, Marijana
- Subjects
- *
CONCRETE walls , *SHEAR strength , *SHEAR walls , *SOFT computing , *GENETIC algorithms - Abstract
The precise estimation of the shear strength of reinforced concrete walls is critical for structural engineers. This projection, nevertheless, is exceedingly complicated because of the varied structural geometries, plethora of load cases, and highly nonlinear relationships between the design requirements and the shear strength. Recent related design code regulations mostly depend on experimental formulations, which have a variety of constraints and establish low prediction accuracy. Hence, different soft computing techniques are used in this study to evaluate the shear capacity of reinforced concrete walls. In particular, developed models for estimating the shear capacity of concrete walls have been investigated, based on experimental test data accessible in the relevant literature. Adaptive neuro-fuzzy inference system, the integrated genetic algorithms, and the integrated particle swarm optimization methods were used to optimize the fuzzy model's membership function range and the results were compared to the outcomes of random forests (RF) model. To determine the accuracy of the models, the results were assessed using several indices. Outliers in the anticipated data were identified and replaced with appropriate values to ensure prediction accuracy. The comparison of the resulting findings with the relevant experimental data demonstrates the potential of hybrid models to determine the shear capacity of reinforced concrete walls reliably and effectively. The findings revealed that the RF model with RMSE = 151.89, MAE = 111.52, and R² = 0.9351 has the best prediction accuracy. Integrated GAFIS and PSOFIS performed virtually identically and had fewer errors than ANFIS. The sensitivity analysis shows that the thickness of the wall (b_w) and concrete compressive strength (f_c) have the most and the least effects on shear strength, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
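The RMSE, MAE, and R² figures reported above are the standard regression metrics. For reference, they can be computed as follows (the shear-strength values below are invented toy numbers, not data from the study):

```python
def regression_metrics(y_true, y_pred):
    """Return (RMSE, MAE, R^2) for a set of predictions."""
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errs) / n
    rmse = (sum(e * e for e in errs) / n) ** 0.5
    mean = sum(y_true) / n
    ss_res = sum(e * e for e in errs)                 # residual sum of squares
    ss_tot = sum((t - mean) ** 2 for t in y_true)     # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return rmse, mae, r2

# toy shear-strength values (kN), purely illustrative
y_true = [850.0, 920.0, 1100.0, 760.0, 990.0]
y_pred = [830.0, 945.0, 1080.0, 790.0, 1000.0]
rmse, mae, r2 = regression_metrics(y_true, y_pred)
```

R² is unitless while RMSE and MAE carry the units of the target, so a comparison like the study's (RMSE = 151.89, MAE = 111.52, R² = 0.9351 for RF) needs all three to judge both relative fit and absolute error magnitude.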