1,621 results for "hybrid models"
Search Results
2. Statistical Methods in Forecasting Water Consumption: A Review of Previous Literature
- Author
-
Mukhlif, Anmar Jabbar, Mustafa, Ayad S., Al-Somaydaii, Jumaa A., Karkush, Mahdi, editor, Choudhury, Deepankar, editor, and Fattah, Mohammed, editor
- Published
- 2025
- Full Text
- View/download PDF
3. An improved hybrid model for shoreline change.
- Author
-
Lakku, Naresh Kumar Goud, Chowdhury, Piyali, and Behera, Manasa Ranjan
- Subjects
COASTAL zone management, SEDIMENT transport, GEOMORPHOLOGY, COASTS, BEACHES, LITTORAL drift, SHORELINES
- Abstract
Predicting the nearshore sediment transport and shifts in coastlines in view of climate change is important for planning and management of coastal infrastructure and requires an accurate prediction of the regional wave climate as well as an in-depth understanding of the complex morphology surrounding the area of interest. Recently, hybrid shoreline evolution models have been used to inform coastal management. These models typically apply the one-line theory to estimate changes in shoreline morphology based on littoral drift gradients calculated from a 2DH coupled wave, flow, and sediment transport model. As per the one-line theory, the calculated littoral drift is uniformly distributed over the active coastal profile. A key challenge facing the application of hybrid models is that they fail to consider complex morphologies when updating the shorelines for several scenarios. This is mainly due to the scarcity of field datasets on beach behavior and nearshore morphological change extending to the local depth of closure, which forces assumptions about this value in overall shoreline shift predictions. In this study, we propose an improved hybrid model for shoreline shift predictions in an open sandy beach system impacted by human interventions and changes in wave climate. Three main conclusions are derived from this study. First, the optimal boundary conditions for modeling shoreline evolution need to vary according to local coastal geomorphology and processes. Second, specifying boundary conditions within physically realistic ranges does not guarantee reliable shoreline evolution predictions. Third, hybrid 2D/one-line models have limited applicability in simple planform morphologies where the active beach profile is subject to direct impacts of wave action and/or human interventions, plausibly due to the one-line theory's assumption of a constant time-averaged coastal profile. These findings provide insight into the drivers of shoreline evolution around sandy beaches, with practical implications for advancing shoreline evolution models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Hybrid Modeling Techniques for Municipal Solid Waste Forecasting: An Application to OECD Countries.
- Author
-
Chellai, Fatih
- Subjects
-
WASTE minimization, WASTE management, WASTE recycling, SOLID waste, STATISTICAL smoothing
- Abstract
Accurate forecasting of municipal solid waste (MSW) generation is critical for effective waste management, given the rising volumes of waste posing environmental and public health challenges. This study investigates the efficacy of hybrid forecasting models in predicting MSW generation trends across Organization for Economic Cooperation and Development (OECD) countries. The empirical analysis utilizes five distinct approaches – ARIMA, Theta model, neural networks, exponential smoothing state space (ETS), and TBATS models. MSW data spanning 1995–2021 for 29 OECD nations are analyzed using the hybrid models and benchmarked against individual ARIMA models. The results demonstrate superior predictive accuracy for the hybrid models across multiple error metrics, capturing complex data patterns and relationships missed by individual models. The forecasts project continued MSW generation growth in most countries but reveal nuanced country-level differences as well. The implications for waste management policies include implementing waste reduction and recycling programs, investing in infrastructure and technology, enhancing public education, implementing pricing incentives, rigorous monitoring and evaluation of practices, and multi-stakeholder collaboration. However, uncertainties related to model selection and data limitations warrant acknowledgment. Overall, this study affirms the value of hybrid forecasting models in providing robust insights to inform evidence-based waste management strategies and transition toward sustainability in the OECD region. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
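A minimal sketch of the hybrid-forecasting idea in entry 4: two of the five approaches named in the abstract (ARIMA and ETS, both available in statsmodels) are fitted to a synthetic annual MSW series and their forecasts are combined by a simple average. The series, model orders, and equal weights are illustrative assumptions, not the paper's setup.

```python
# A minimal sketch of a hybrid forecast: average ARIMA and ETS predictions.
# Synthetic annual MSW series stands in for the OECD data used in the paper.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
years = pd.date_range("1995", periods=27, freq="YS")          # 1995-2021
msw = pd.Series(500 + 5 * np.arange(27) + rng.normal(0, 8, 27), index=years)

h = 5  # forecast horizon in years
arima_fc = ARIMA(msw, order=(1, 1, 1)).fit().forecast(h)
ets_fc = ExponentialSmoothing(msw, trend="add").fit().forecast(h)

hybrid_fc = (arima_fc + ets_fc) / 2                            # equal-weight combination
print(hybrid_fc)
```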
5. Forecasting Multi-Step Soil Moisture with Three-Phase Hybrid Wavelet-Least Absolute Shrinkage Selection Operator-Long Short-Term Memory Network (moDWT-Lasso-LSTM) Model.
- Author
-
Jayasinghe, W. J. M. Lakmini Prarthana, Deo, Ravinesh C., Raj, Nawin, Ghimire, Sujan, Yaseen, Zaher Mundher, Nguyen-Huy, Thong, and Ghahramani, Afshin
- Subjects
MACHINE learning, ARTIFICIAL intelligence, DISCRETE wavelet transforms, FEATURE selection, INDEPENDENT variables, DEEP learning
- Abstract
To develop agricultural risk management strategies, the early identification of water deficits during the growing cycle is critical. This research proposes a deep learning hybrid approach for multi-step soil moisture forecasting in the Bundaberg region in Queensland, Australia, with predictions made at 1-day, 14-day, and 30-day intervals. The model integrates Geospatial Interactive Online Visualization and Analysis Infrastructure (Giovanni) satellite data with ground observations. Due to the periodicity, transience, and trends in soil moisture of the top layer, the time series datasets were complex. Hence, the Maximum Overlap Discrete Wavelet Transform (moDWT) method was adopted for data decomposition to identify the wavelet and scaling coefficients of the predictor variables best correlated with the target top-layer moisture. The proposed 3-phase hybrid moDWT-Lasso-LSTM model used the Least Absolute Shrinkage and Selection Operator (Lasso) method for feature selection. Optimal hyperparameters were identified using the Hyperopt algorithm with the deep learning LSTM method. The proposed model's performance was compared with benchmark machine learning (ML) models. In total, nine models were developed, including three standalone models (e.g., LSTM), three integrated feature selection models (e.g., Lasso-LSTM), and three hybrid models incorporating wavelet decomposition and feature selection (e.g., moDWT-Lasso-LSTM). Compared to the alternative models, the hybrid deep moDWT-Lasso-LSTM produced the superior predictive model across statistical performance metrics. For example, at the 1-day horizon, the moDWT-Lasso-LSTM model exhibits the highest accuracy, with the highest R² ≈ 0.92469 and the lowest RMSE ≈ 0.97808, MAE ≈ 0.76623, and SMAPE ≈ 4.39700%, outperforming other models. The moDWT-Lasso-DNN model follows closely, while the Lasso-ANN and Lasso-DNN models show lower accuracy with higher RMSE and MAE values. The ANN and DNN models have the lowest performance, with higher error metrics and lower R² values compared to the deep learning models incorporating moDWT and Lasso techniques. This research emphasizes the utility of advanced complementary ML models, such as the developed moDWT-Lasso-LSTM 3-phase hybrid model, as robust data-driven tools for early forecasting of soil moisture. [ABSTRACT FROM AUTHOR] (See the three-phase sketch after this record.)
- Published
- 2024
- Full Text
- View/download PDF
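A sketch of the three-phase pipeline from entry 5, under two stated substitutions: PyWavelets' stationary wavelet transform (pywt.swt) stands in for the moDWT (both are undecimated transforms), and the soil-moisture series is synthetic. The phase ordering follows the abstract: wavelet decomposition, Lasso feature selection, then an LSTM.

```python
# Three-phase moDWT-Lasso-LSTM sketch. pywt.swt (undecimated, like moDWT)
# replaces the paper's transform; the soil-moisture data are synthetic.
import numpy as np
import pywt
from sklearn.linear_model import Lasso
from tensorflow.keras import layers, models

rng = np.random.default_rng(1)
n = 512
t = np.arange(n)
soil = 0.3 + 0.1 * np.sin(2 * np.pi * t / 90) + rng.normal(0, 0.01, n)

# Phase 1: undecimated wavelet decomposition of the predictor series.
coeffs = pywt.swt(soil, "db4", level=3)           # [(cA3, cD3), (cA2, cD2), (cA1, cD1)]
subbands = np.column_stack([c for pair in coeffs for c in pair])  # shape (n, 6)

# Phase 2: Lasso keeps only sub-bands related to the 1-step-ahead target.
# (alpha is kept small so at least some sub-bands survive; a sketch, not a tuned pipeline)
X, y = subbands[:-1], soil[1:]
lasso = Lasso(alpha=1e-4).fit(X, y)
keep = np.flatnonzero(lasso.coef_)                # indices of selected sub-bands
X_sel = X[:, keep]

# Phase 3: LSTM over sliding windows of the selected sub-bands.
win = 14
Xw = np.stack([X_sel[i:i + win] for i in range(len(X_sel) - win)])
yw = y[win:]

model = models.Sequential([
    layers.Input(shape=(win, len(keep))),
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(Xw, yw, epochs=5, batch_size=32, verbose=0)
```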
6. Enhancing stress detection in wearable IoT devices using federated learning and LSTM based hybrid model.
- Author
-
Mouhni, Naoual, Amalou, Ibtissam, Chakri, Sana, Tourad, Mohamedou Cheikh, Chakraoui, Mohamed, and Abdali, Abdelmounaim
- Subjects
CONVOLUTIONAL neural networks, FEDERATED learning, BLENDED learning, RANDOM forest algorithms, DEEP learning
- Abstract
In the domain of smart health devices, the accurate detection of physical indicator levels plays a crucial role in enhancing safety and well-being. This paper introduces a cross-device federated learning framework using a hybrid deep learning model. Specifically, the paper presents a comprehensive comparison of different combinations of long short-term memory (LSTM), gated recurrent unit (GRU), convolutional neural network (CNN), random forest (RF), and extreme gradient boosting (XGBoost) models for forecasting stress levels from time-series information derived from wearable smart gadgets. The LSTM-RF model demonstrated the highest accuracy, achieving 93.53% for user 1, 99.40% for user 2, and 97.88% for user 3. Similarly, the LSTM-XGBoost model yielded favorable outcomes, with accuracy rates of 85.88%, 98.55%, and 92.02% for users 1, 2, and 3, respectively, out of the 23 users studied. These findings highlight the efficacy of federated learning and the utilization of hybrid models in stress detection. Unlike traditional centralized learning paradigms, the presented federated approach ensures privacy preservation and reduces data transmission requirements by processing data locally on edge devices. [ABSTRACT FROM AUTHOR] (See the LSTM-RF sketch after this record.)
- Published
- 2024
- Full Text
- View/download PDF
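One plausible reading of the LSTM-RF hybrid in entry 6, sketched for a single client: an LSTM is trained on windows of wearable sensor data, its penultimate layer is reused as a feature extractor, and a random forest classifies the features. The federated part (local training plus weight aggregation across devices) is omitted; the data and layer sizes are assumptions.

```python
# Hybrid LSTM-RF sketch: an LSTM encodes wearable time-series windows,
# a random forest classifies the learned features. Data are synthetic
# stand-ins for the per-user sensor streams described in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from tensorflow.keras import layers, models

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 60, 4)).astype("float32")   # 600 windows, 60 steps, 4 sensors
y = rng.integers(0, 2, 600)                           # binary stress label

# Train a small LSTM classifier end to end, then reuse its penultimate layer.
inp = layers.Input(shape=(60, 4))
feat = layers.LSTM(32)(inp)
out = layers.Dense(1, activation="sigmoid")(feat)
lstm = models.Model(inp, out)
lstm.compile(optimizer="adam", loss="binary_crossentropy")
lstm.fit(X, y, epochs=3, batch_size=32, verbose=0)

encoder = models.Model(inp, feat)                     # LSTM as feature extractor
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(encoder.predict(X, verbose=0), y)
print("train accuracy:", rf.score(encoder.predict(X, verbose=0), y))
```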
7. Hybrid modeling approach for precise estimation of energy production and consumption based on temperature variations.
- Author
-
Mbasso, Wulfran Fendzi, Molu, Reagan Jean Jacques, Harrison, Ambe, Pushkarna, Mukesh, Kemdoum, Fritz Nguemo, Donfack, Emmanuel Fendzi, Jangir, Pradeep, Tiako, Pierre, and Tuka, Milkias Berhanu
- Subjects
-
INDEPENDENT variables, CONSUMPTION (Economics), CLIMATE change, LOW temperatures, HIGH temperatures
- Abstract
This study introduces an advanced mathematical methodology for predicting energy generation and consumption based on temperature variations in regions with diverse climatic conditions and increasing energy demands. Using a comprehensive dataset of monthly energy production, consumption, and temperature readings spanning ten years (2010–2020), we applied polynomial, sinusoidal, and hybrid modeling techniques to capture the non-linear and cyclical relationships between temperature and energy metrics. The hybrid model, which combines sinusoidal and polynomial functions, achieved an accuracy of 79.15% in estimating energy consumption using temperature as a predictor variable. This model effectively captures the seasonal and non-linear consumption patterns, demonstrating a significant improvement over conventional models. In contrast, the polynomial model for energy production, while yielding partial accuracy (R² = 0.65), highlights the need for more advanced techniques to fully capture the temperature-dependent nature of energy production. The results indicate that temperature variations significantly affect energy consumption, with higher temperatures driving increased energy demand for cooling, while lower temperatures affect production efficiency, particularly in systems like hydropower. These findings underscore the necessity for integrating sophisticated models into energy planning to ensure resilience in energy systems amidst climate variability. The study offers critical insights for policymakers to optimize energy generation and distribution in response to changing climatic conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
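Entry 7's hybrid of sinusoidal and polynomial terms can be fitted directly by non-linear least squares. The sketch below assumes a quadratic temperature response plus an annual sine term and fits it with scipy's curve_fit on synthetic monthly data; the functional form is a guess at, not a reproduction of, the paper's model.

```python
# Hybrid sinusoidal-plus-polynomial fit of monthly consumption on
# temperature and month index, via least squares. Synthetic data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
months = np.arange(120)                               # ten years of monthly data
temp = 20 + 8 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 1, 120)
cons = (100 + 0.5 * temp + 0.02 * temp**2
        + 10 * np.sin(2 * np.pi * months / 12 + 0.6) + rng.normal(0, 3, 120))

def hybrid(X, a0, a1, a2, A, phi):
    t, m = X                                          # temperature and month index
    poly = a0 + a1 * t + a2 * t**2                    # non-linear temperature response
    seas = A * np.sin(2 * np.pi * m / 12 + phi)       # annual cycle
    return poly + seas

params, _ = curve_fit(hybrid, (temp, months), cons, p0=[100, 0.5, 0.0, 5.0, 0.0])
pred = hybrid((temp, months), *params)
ss_res = np.sum((cons - pred) ** 2)
ss_tot = np.sum((cons - cons.mean()) ** 2)
print("R^2:", 1 - ss_res / ss_tot)
```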
8. Early Cervical Cancer Diagnosis with SWIN-Transformer and Convolutional Neural Networks.
- Author
-
Mohammed, Foziya Ahmed, Tune, Kula Kekeba, Mohammed, Juhar Ahmed, Wassu, Tizazu Alemu, and Muhie, Seid
- Subjects
-
CONVOLUTIONAL neural networks, TRANSFORMER models, DATA augmentation, IMAGE recognition (Computer vision), EARLY detection of cancer
- Abstract
Introduction: Early diagnosis of cervical cancer at the precancerous stage is critical for effective treatment and improved patient outcomes. Objective: This study aims to explore the use of SWIN Transformer and Convolutional Neural Network (CNN) hybrid models combined with transfer learning to classify precancerous colposcopy images. Methods: Out of 913 images from 200 cases obtained from the Colposcopy Image Bank of the International Agency for Research on Cancer, 898 met quality standards and were classified as normal, precancerous, or cancerous based on colposcopy and histopathological findings. The cases corresponding to the 360 precancerous images, along with an equal number of normal cases, were divided into a 70/30 train–test split. The SWIN Transformer and CNN hybrid model combines the advantages of local feature extraction by CNNs with the global context modeling by SWIN Transformers, resulting in superior classification performance and a more automated process. The hybrid model approach involves enhancing image quality through preprocessing, extracting local features with CNNs, capturing the global context with the SWIN Transformer, integrating these features for classification, and refining the training process by tuning hyperparameters. Results: The trained model achieved the following classification performances on fivefold cross-validation data: a 94% Area Under the Curve (AUC), an 88% F1 score, and 87% accuracy. On two completely independent test sets, which were never seen by the model during training, the model achieved an 80% AUC, a 75% F1 score, and 75% accuracy on the first test set (precancerous vs. normal) and an 82% AUC, a 78% F1 score, and 75% accuracy on the second test set (cancer vs. normal). Conclusions: These high-performance metrics demonstrate the models' effectiveness in distinguishing precancerous from normal colposcopy images, even with modest datasets, limited data augmentation, and the smaller effect size of precancerous images compared to malignant lesions. The findings suggest that these techniques can significantly aid in the early detection of cervical cancer at the precancerous stage. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
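A generic CNN-plus-transformer hybrid in the spirit of entry 8: convolutional layers extract local features and a transformer encoder models global context over the flattened feature map. This PyTorch sketch does not reproduce the SWIN architecture, the transfer-learning step, or the colposcopy data; shapes and depths are arbitrary.

```python
# Generic CNN + transformer-encoder hybrid classifier, a stand-in for the
# paper's SWIN+CNN model (the actual SWIN backbone is not reproduced here).
import torch
import torch.nn as nn

class HybridCNNTransformer(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        # CNN branch: local texture features.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Transformer branch: global context over the CNN feature map,
        # flattened into a token sequence.
        enc_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (B, 3, H, W)
        f = self.cnn(x)                        # (B, 64, H/4, W/4)
        tokens = f.flatten(2).transpose(1, 2)  # (B, H*W/16, 64)
        ctx = self.encoder(tokens).mean(dim=1) # pooled global context
        return self.head(ctx)

model = HybridCNNTransformer()
logits = model(torch.randn(2, 3, 64, 64))
print(logits.shape)                            # torch.Size([2, 3])
```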
9. Data‐Driven Approach Using a Hybrid Model for Predicting Oxygen Consumption in Argon Oxygen Decarburization Converter.
- Author
-
Mingming, Li, Xihong, Chen, Dongxu, Liu, Lei, Shao, Wentao, Zhou, and Zongshu, Zou
- Abstract
Accurately controlling the oxygen supply in the argon oxygen decarburization (AOD) process is invariably desired for efficient decarburization and reduced alloying-element consumption. Herein, a data-driven approach using a hybrid model that integrates an oxygen-balance mechanism model with a two-layer Stacking ensemble learning model is successfully established for predicting oxygen consumption in the AOD converter. In this hybrid model, the oxygen-balance mechanism model is used to calculate the oxygen consumption based on industrial data. The model's calculation error is then compensated using an optimized two-layer Stacking model, identified as a (random forest (RF) + XGBoost + ridge regression)-RF model by evaluating different hybrid model frameworks and Bayesian optimization. The results show that, in comparison to a conventional prediction model based on the oxygen-balance mechanism, the present hybrid model greatly improves the control accuracy of oxygen consumption in AOD industrial production. The hit rate and mean absolute error of the present hybrid model for predicting oxygen consumption are 84.8% and 330 Nm³, respectively, within an absolute oxygen consumption prediction error of ±600 Nm³ (relative error of 3.8%). This data-driven approach using the present hybrid model provides one pathway to efficient oxygen consumption control in the AOD process. [ABSTRACT FROM AUTHOR] (See the stacking sketch after this record.)
- Published
- 2024
- Full Text
- View/download PDF
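A sketch of the mechanism-plus-ML structure described in entry 9: a placeholder oxygen-balance formula produces a first-principles estimate, and a (RF + XGBoost + ridge)-RF stacking ensemble (scikit-learn's StackingRegressor with xgboost) learns its residual. The mechanism formula and the data are invented stand-ins.

```python
# Mechanism-plus-ML hybrid sketch: a (RF + XGBoost + ridge) -> RF stacking
# ensemble learns the residual of a simple oxygen-balance mechanism model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from xgboost import XGBRegressor

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 6))                       # process features per heat
true_o2 = 9000 + 400 * X[:, 0] + 150 * X[:, 1] ** 2 + rng.normal(0, 100, 500)

def mechanism_model(X):
    # Placeholder for the oxygen-balance calculation (not the paper's formula).
    return 9000 + 380 * X[:, 0]

residual = true_o2 - mechanism_model(X)             # mechanism error to be learned

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                ("xgb", XGBRegressor(n_estimators=200, verbosity=0)),
                ("ridge", Ridge())],
    final_estimator=RandomForestRegressor(n_estimators=100, random_state=0),
)
stack.fit(X, residual)

hybrid_pred = mechanism_model(X) + stack.predict(X) # corrected prediction
print("MAE (Nm^3):", np.mean(np.abs(true_o2 - hybrid_pred)))
```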
10. Hybrid model approach in data mining.
- Author
-
Bakirarar, Batuhan, Cosgun, Erdal, and Elhan, Atilla Halil
- Subjects
-
SUPERVISED learning, MACHINE learning, DATA mining, DATABASES, INDEPENDENT variables
- Abstract
Studies on hybrid data mining approaches have been increasing in recent years. Hybrid data mining is defined as an effective combination of various data mining techniques that uses the strength of each technique to compensate for the others' weaknesses. The purpose of this study is to present state-of-the-art data mining algorithms and applications and to propose a new hybrid data mining approach for classifying medical data. In addition, the study aimed to calculate performance metrics of the data mining methods and to compare them with the metrics obtained from the hybrid model. The study utilized simulated datasets produced on the basis of various scenarios and a hepatitis dataset obtained from the UCI database. Supervised learning algorithms were used, and hybrid models were created by combining these algorithms. In the simulated datasets, MCC values increased with a higher sample size and higher correlation between the independent variables. In addition, as the correlation between independent variables increased in imbalanced datasets, a noticeable increase was observed in the performance metrics of the group with the lower sample size. A similar pattern was observed with the actual datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Hybrid Long Short-Term Memory Wavelet Transform Models for Short-Term Electricity Load Forecasting.
- Author
-
Guenoukpati, Agbassou, Agbessi, Akuété Pierre, Salami, Adekunlé Akim, and Bakpo, Yawo Amen
- Subjects
-
STANDARD deviations, ARTIFICIAL neural networks, ELECTRIC networks, ELECTRICAL load, LOAD forecasting (Electric power systems), ELECTRICAL energy
- Abstract
To ensure the constant availability of electrical energy, power companies must consistently maintain a balance between supply and demand. However, electrical load is influenced by a variety of factors, necessitating the development of robust forecasting models. This study seeks to enhance electricity load forecasting by proposing a hybrid model that combines Sorted Coefficient Wavelet Decomposition with Long Short-Term Memory (LSTM) networks. This approach offers significant advantages in reducing algorithmic complexity and effectively processing patterns within the same class of data. Various models, including Stacked LSTM, Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM), and Convolutional Long Short-Term Memory (ConvLSTM), were compared and optimized using grid search with cross-validation on consumption data from Lome, a city in Togo. The results indicate that the ConvLSTM model outperforms its counterparts based on Mean Absolute Percentage Error (MAPE), Root Mean Squared Error (RMSE), and correlation coefficient (R²) metrics. The ConvLSTM model was further refined using wavelet decomposition with coefficient sorting, resulting in the WT+ConvLSTM model. This proposed approach significantly narrows the gap between actual and predicted loads, reducing discrepancies from 10–50 MW to 0.5–3 MW. In comparison, the WT+ConvLSTM model surpasses Autoregressive Integrated Moving Average (ARIMA) models and Multilayer Perceptron (MLP) type artificial neural networks, achieving a MAPE of 0.485%, an RMSE of 0.61 MW, and an R² of 0.99. This approach demonstrates substantial robustness in electricity load forecasting, aiding stakeholders in the energy sector to make more informed decisions. [ABSTRACT FROM AUTHOR] (See the WT+LSTM sketch after this record.)
- Published
- 2024
- Full Text
- View/download PDF
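A simplified WT+LSTM sketch for entry 11: the load series is denoised with a discrete wavelet transform (soft-thresholding the detail coefficients) and an LSTM forecasts the smoothed series. The paper's coefficient-sorting scheme and ConvLSTM layer are not reproduced; the hourly series is synthetic.

```python
# WT+LSTM sketch: wavelet-denoise the load series, then forecast the
# smoothed series with an LSTM on 24-hour windows. Synthetic data.
import numpy as np
import pywt
from tensorflow.keras import layers, models

rng = np.random.default_rng(5)
t = np.arange(2048)
load = 80 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, 2048)   # MW

# Wavelet decomposition; shrink small detail coefficients (soft threshold).
coeffs = pywt.wavedec(load, "db4", level=4)
coeffs = [coeffs[0]] + [pywt.threshold(c, 2.0, mode="soft") for c in coeffs[1:]]
smooth = pywt.waverec(coeffs, "db4")[: len(load)]

# LSTM over 24-hour windows of the smoothed series.
win = 24
Xw = np.stack([smooth[i:i + win] for i in range(len(smooth) - win)])[..., None]
yw = smooth[win:]

model = models.Sequential([
    layers.Input(shape=(win, 1)),
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(Xw, yw, epochs=3, batch_size=64, verbose=0)
```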
12. Hybrid RNNs and USE for enhanced sequential sentence classification in biomedical paper abstracts.
- Author
-
Ndama, Oussama, Bensassi, Ismail, and En-Naimi, El Mokhtar
- Subjects
RECURRENT neural networks, DATA mining, LEXICAL access, MEDICAL research, ALGORITHMS
- Abstract
This research evaluates a number of hybrid recurrent neural network (RNN) architectures for classifying sequential sentences in biomedical abstracts. The architectures include long short-term memory (LSTM), bidirectional LSTM (BI-LSTM), gated recurrent unit (GRU), and bidirectional GRU (BIGRU) models, all of which are combined with the universal sentence encoder (USE). The investigation assesses their efficacy in categorizing sentences into predefined classes: background, objective, method, result, and conclusion. Each RNN variant is used with the pre-trained USE as word embeddings to find complex sequential relationships in biomedical text. Results demonstrate the adaptability and effectiveness of these hybrid architectures in discerning diverse sentence functions. This research addresses the need for improved literature comprehension in biomedicine by employing automated sentence classification techniques, highlighting the significance of advanced hybrid algorithms in enhancing text classification methodologies within biomedical research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Compressive Strength Prediction of Fly Ash-Based Concrete Using Single and Hybrid Machine Learning Models.
- Author
-
Li, Haiyu, Chung, Heungjin, Li, Zhenting, and Li, Weiping
- Subjects
ARTIFICIAL neural networks, MACHINE learning, CONVOLUTIONAL neural networks, TRANSFORMER models, ARTIFICIAL intelligence
- Abstract
The compressive strength of concrete is a crucial parameter in structural design, yet its determination in a laboratory setting is both time-consuming and expensive. The prediction of compressive strength in fly ash-based concrete can be accelerated through the use of machine learning algorithms with artificial intelligence, which can effectively address the problems associated with this process. This paper presents the most innovative model algorithms established based on artificial intelligence technology. These include three single models, namely a fully connected neural network model (FCNN), a convolutional neural network model (CNN), and a transformer model (TF), and three hybrid models: FCNN + CNN, TF + FCNN, and TF + CNN. A total of 471 datasets were employed in the experiments, comprising 7 input features: cement (C), fly ash (FA), water (W), superplasticizer (SP), coarse aggregate (CA), fine aggregate (S), and age (D). Six models were subsequently applied to predict the compressive strength (CS) of fly ash-based concrete. Furthermore, the loss function curves, assessment indexes, linear correlation coefficient, and related literature indexes of each model were employed for comparison. This analysis revealed that the FCNN + CNN model exhibited the highest prediction accuracy, with the following metrics: R² = 0.95, MSE = 14.18, MAE = 2.32, SMAPE = 0.1, and R = 0.973. Additionally, SHAP was utilized to elucidate the significance of the model parameter features. The findings revealed that C and D exerted the most substantial influence on the model prediction outcomes, followed by W and FA. Nevertheless, CA, S, and SP demonstrated comparatively minimal influence. Finally, a GUI interface for predicting compressive strength was developed based on the six models and nonlinear functional relationships, and a criterion for minimum strength was derived by comparison and used to optimize a reasonable mixing ratio, thus achieving a fast, concise, and reliable data-driven interaction. [ABSTRACT FROM AUTHOR] (See the FCNN+CNN sketch after this record.)
- Published
- 2024
- Full Text
- View/download PDF
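A sketch of the FCNN + CNN hybrid from entry 13 on the 7-feature mix design input: a dense branch and a Conv1D branch over the feature vector are concatenated before a regression head. Branch widths and the random training data are placeholders; only the two-branch structure follows the abstract.

```python
# FCNN+CNN hybrid sketch for the 7-feature concrete mix input
# (C, FA, W, SP, CA, S, D). Training data are random placeholders.
import numpy as np
from tensorflow.keras import layers, models

rng = np.random.default_rng(6)
X = rng.uniform(0, 1, size=(471, 7)).astype("float32")
y = rng.uniform(10, 80, size=471).astype("float32")        # compressive strength, MPa

inp = layers.Input(shape=(7,))
dense = layers.Dense(64, activation="relu")(inp)           # FCNN branch

seq = layers.Reshape((7, 1))(inp)                          # treat features as a 1D sequence
conv = layers.Conv1D(16, 3, activation="relu", padding="same")(seq)
conv = layers.GlobalAveragePooling1D()(conv)               # CNN branch

merged = layers.Concatenate()([dense, conv])
out = layers.Dense(1)(layers.Dense(32, activation="relu")(merged))

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=16, verbose=0)
```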
14. Hybrid deep learning models with data fusion approach for electricity load forecasting.
- Author
-
Özen, Serkan, Yazıcı, Adnan, and Atalay, Volkan
- Subjects
-
CONVOLUTIONAL neural networks, DEEP learning, MULTISENSOR data fusion, ELECTRIC power consumption, BLENDED learning
- Abstract
This study explores the application of deep learning in forecasting electricity consumption. Initially, we assess the performance of standard neural networks, such as convolutional neural networks (CNN) and long short-term memory (LSTM), along with basic methods like ARIMA and random forest, on a univariate electricity consumption data set. Subsequently, we develop hybrid models for a comprehensive multivariate data set created by merging weather and electricity data. These hybrid models demonstrate superior performance compared to individual models on the univariate data set. Our main contribution is the introduction of a novel hybrid data fusion model. This model integrates a single-model approach for univariate data, a hybrid model for multivariate data, and a linear regression model that processes the outputs from both. Our hybrid fusion model achieved an RMSE value of 0.0871 on the Chicago data set, outperforming other models such as Random Forest (0.2351), ARIMA (0.2184), CNN (0.1802), LSTM + LSTM (0.1496), and CNN + LSTM (0.1587). Additionally, our model surpassed the performance of our base transformer model. Furthermore, combining the best-performing transformer model with a Gaussian Process model resulted in further improvement in performance: the Transformer + Gaussian model achieved an RMSE of 0.0768, compared with 0.0781 for the single transformer model. Similar trends were observed in the Pittsburgh and IHEC data sets. [ABSTRACT FROM AUTHOR] (See the fusion sketch after this record.)
- Published
- 2024
- Full Text
- View/download PDF
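A sketch of the fusion idea in entry 14: one model is trained on univariate (lagged-load) input, another on multivariate (weather) input, and a linear regression combines their predictions. For brevity the fusion stage is fitted on in-sample predictions, which a real pipeline would replace with held-out predictions; the models and data are stand-ins.

```python
# Data-fusion sketch: linear regression over the predictions of a
# univariate model and a multivariate model. Synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 1000
weather = rng.normal(size=(n, 3))                 # multivariate inputs
lag_load = rng.normal(size=(n, 1))                # univariate (lagged load) input
y = 2 * lag_load[:, 0] + weather @ np.array([1.0, -0.5, 0.3]) + rng.normal(0, 0.2, n)

X1_tr, X1_te, X2_tr, X2_te, y_tr, y_te = train_test_split(
    lag_load, weather, y, test_size=0.3, random_state=0)

m_uni = RandomForestRegressor(random_state=0).fit(X1_tr, y_tr)   # univariate model
m_multi = RandomForestRegressor(random_state=0).fit(X2_tr, y_tr) # multivariate model

# Second stage: linear regression over the two models' predictions.
stacked_tr = np.column_stack([m_uni.predict(X1_tr), m_multi.predict(X2_tr)])
fusion = LinearRegression().fit(stacked_tr, y_tr)

stacked_te = np.column_stack([m_uni.predict(X1_te), m_multi.predict(X2_te)])
rmse = np.sqrt(np.mean((fusion.predict(stacked_te) - y_te) ** 2))
print("fusion RMSE:", rmse)
```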
15. Machine learning and deep learning models based grid search cross validation for short-term solar irradiance forecasting.
- Author
-
El-Shahat, Doaa, Tolba, Ahmed, Abouhawwash, Mohamed, and Abdel-Basset, Mohamed
- Subjects
MACHINE learning, SUSTAINABILITY, TECHNICAL specifications, ARTIFICIAL intelligence, STANDARD deviations, DEEP learning
- Abstract
In late 2023, the United Nations conference on climate change (COP28), which was held in Dubai, encouraged a quick move from fossil fuels to renewable energy. Solar energy is one of the most promising forms of energy that is both sustainable and renewable. Generally, photovoltaic systems transform solar irradiance into electricity. Unfortunately, instability and intermittency in solar radiation can lead to interruptions in electricity production. The accurate forecasting of solar irradiance guarantees sustainable power production even when solar irradiance is not present, as batteries can store solar energy to be used during periods of solar absence. Additionally, deterministic models take into account the specification of technical PV systems and may not be accurate for low solar irradiance. This paper presents a comparative study of the most common Deep Learning (DL) and Machine Learning (ML) algorithms employed for short-term solar irradiance forecasting. The dataset was gathered in Islamabad during a five-year period, from 2015 to 2019, at hourly intervals with accurate meteorological sensors. Furthermore, Grid Search Cross Validation (GSCV) with five folds is introduced to the ML and DL models for optimizing their hyperparameters. Several performance metrics are used to assess the algorithms, such as the Adjusted R² score, Normalized Root Mean Square Error (NRMSE), Mean Absolute Deviation (MAD), Mean Absolute Error (MAE) and Mean Square Error (MSE). The statistical analysis shows that CNN-LSTM outperforms its counterparts of nine well-known DL models with an Adjusted R² score of 0.984. For ML algorithms, gradient boosting regression is an effective forecasting method with an Adjusted R² score of 0.962, beating its rivals of six ML models. Furthermore, SHAP and LIME are examples of explainable Artificial Intelligence (XAI) utilized for understanding the reasons behind the obtained results. [ABSTRACT FROM AUTHOR] (See the GSCV sketch after this record.)
- Published
- 2024
- Full Text
- View/download PDF
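The five-fold grid search cross-validation named in entry 15, sketched with scikit-learn's GridSearchCV around a gradient boosting regressor (the paper's best ML model family). The parameter grid and the placeholder data are assumptions.

```python
# Five-fold grid search cross-validation over gradient boosting
# hyperparameters. Irradiance data are random placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(8)
X = rng.normal(size=(500, 5))                    # meteorological features
y = X[:, 0] * 2 + rng.normal(0, 0.1, 500)        # stand-in irradiance target

param_grid = {
    "n_estimators": [100, 300],
    "learning_rate": [0.05, 0.1],
    "max_depth": [2, 3],
}
gscv = GridSearchCV(GradientBoostingRegressor(random_state=0),
                    param_grid, cv=5, scoring="r2")
gscv.fit(X, y)
print(gscv.best_params_, round(gscv.best_score_, 3))
```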
16. Cell factory design with advanced metabolic modelling empowered by artificial intelligence.
- Author
-
Lu, Hongzhong, Xiao, Luchi, Liao, Wenbin, Yan, Xuefeng, and Nielsen, Jens
- Subjects
-
MACHINE learning, FACTORY design & construction, METABOLIC models, ARTIFICIAL intelligence, BIOLOGICAL models, SYNTHETIC biology
- Abstract
Advances in synthetic biology and artificial intelligence (AI) have provided new opportunities for modern biotechnology. High-performance cell factories, the backbone of industrial biotechnology, are ultimately responsible for determining whether a bio-based product succeeds or fails in the fierce competition with petroleum-based products. To date, one of the greatest challenges in synthetic biology is the creation of high-performance cell factories in a consistent and efficient manner. As so-called white-box models, numerous metabolic network models have been developed and used in computational strain design. Moreover, great progress has been made in AI-powered strain engineering in recent years. Both approaches have advantages and disadvantages. Therefore, the deep integration of AI with metabolic models is crucial for the construction of superior cell factories with higher titres, yields and production rates. The detailed applications of the latest advanced metabolic models and AI in computational strain design are summarized in this review. Additionally, approaches for the deep integration of AI and metabolic models are discussed. It is anticipated that advanced mechanistic metabolic models powered by AI will pave the way for the efficient construction of powerful industrial chassis strains in the coming years. • Advanced mechanistic metabolic models enhance rational design of cell factories • Machine learning models refine reconstruction of functional metabolic models. • Data-driven AI models provide alternative solutions for strain design in DBTL cycle. • Hybrid AI models with biological insights boost precision in cell factory design. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Dealing with Anomalies in Day-Ahead Market Prediction Using Machine Learning Hybrid Model.
- Author
-
Pilot, Karol, Ganczarek-Gamrot, Alicja, and Kania, Krzysztof
- Subjects
-
MACHINE learning, ENERGY industries, ELECTRICITY pricing, ELECTRICITY markets, PREDICTION models
- Abstract
Forecasting the electricity market, even in the short term, is a difficult task, due to the nature of this commodity, the lack of storage capacity, and the multiplicity and volatility of factors that influence its price. The sensitivity of the market results in the appearance of anomalies in the market, during which forecasting models often break down. The aim of this paper is to present the possibility of using hybrid machine learning models to forecast the price of electricity, especially when such events occur. It includes the automatic detection of anomalies using three different switch types and two independent forecasting models, one for use during periods of stable markets and the other during periods of anomalies. The results of empirical tests conducted on data from the Polish energy market showed that the proposed solution improves the overall quality of prediction compared to using each model separately and significantly improves the quality of prediction during anomaly periods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
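A minimal switching-hybrid sketch for entry 17: a z-score rule flags anomalous days and routes each prediction to either a stable-market model or an anomaly model. The paper evaluates three switch types and detects anomalies automatically; the rule, models, and data below are simplified stand-ins (the flag here is computed from realized prices purely for illustration, whereas a deployed switch would use signals available ahead of time).

```python
# Switching hybrid: route each day to a "stable" or an "anomaly" model
# based on a simple z-score flag. Synthetic market data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(9)
n = 800
X = rng.normal(size=(n, 4))                         # market features
price = 50 + 3 * X[:, 0] + rng.normal(0, 1, n)
spikes = rng.random(n) < 0.05                       # 5% anomalous days
price[spikes] += rng.normal(40, 10, spikes.sum())

mu, sigma = price.mean(), price.std()
is_anomaly = np.abs(price - mu) > 2 * sigma         # simple z-score switch

m_stable = LinearRegression().fit(X[~is_anomaly], price[~is_anomaly])
m_anom = RandomForestRegressor(random_state=0).fit(X[is_anomaly], price[is_anomaly])

def predict(x_row, anomaly_flag):
    # The switch selects which of the two independent models answers.
    model = m_anom if anomaly_flag else m_stable
    return model.predict(x_row.reshape(1, -1))[0]

print(predict(X[0], is_anomaly[0]))
```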
18. A General Framework for Generating Three-Components Heavy-Tailed Distributions with Application.
- Author
-
Osatohanmwen, Patrick, Oyegue, Francis O., Ogbonmwan, Sunday M., and Muhwava, William
- Subjects
DISTRIBUTION (Probability theory), EXTREME value theory, VALUE distribution theory, DATA distribution, PARAMETER estimation
- Abstract
The estimation of a certain threshold beyond which an extreme value distribution can be fitted to the tail of a data distribution remains one of the main issues in the theory of statistics of extremes. While standard Peak over Threshold (PoT) approaches determine this threshold graphically, we introduce in this paper a general framework which makes it possible to determine this threshold algorithmically by estimating it as a free parameter within a composite distribution. To see how this threshold point arises, we propose a general framework for generating three-component hybrid distributions which meets the needs of data sets with a heavy right tail. The approach combines a distribution which can efficiently model the bulk of the data around the mean with a heavy-tailed distribution meant to model the data observations in the tail, using another distribution as a link to connect the two. Some special examples of distributions resulting from the general framework are generated and studied. An estimation algorithm based on the maximum likelihood method is proposed for the estimation of the free parameters of the hybrid distributions. An application of the hybrid distributions to the S&P 500 index financial data set is also carried out. [ABSTRACT FROM AUTHOR] (See the composite-fit sketch after this record.)
- Published
- 2024
- Full Text
- View/download PDF
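A reduced version of the composite-distribution idea in entry 18, with the tail threshold u estimated as a free parameter by maximum likelihood: a lognormal bulk truncated at u is spliced to a Pareto tail. The paper's framework has three components joined by a link distribution; this two-piece sketch on synthetic data only illustrates treating u as a fitted parameter.

```python
# Two-piece composite fit with the threshold u as a free MLE parameter:
# truncated-lognormal bulk on (0, u], Pareto tail on (u, inf). Synthetic data.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(10)
x = np.concatenate([rng.lognormal(0.0, 0.5, 900),      # bulk
                    2.5 * (1 + rng.pareto(2.5, 100))]) # heavy right tail

def nll(theta):
    s, scale, alpha, u, p = theta
    lo, hi = x[x <= u], x[x > u]
    # Bulk: lognormal truncated to (0, u]; tail: Pareto with minimum u.
    ll_lo = (np.log(p) + stats.lognorm.logpdf(lo, s, scale=scale)
             - stats.lognorm.logcdf(u, s, scale=scale))
    ll_hi = np.log1p(-p) + np.log(alpha) + alpha * np.log(u) - (alpha + 1) * np.log(hi)
    return -(ll_lo.sum() + ll_hi.sum())

res = minimize(nll, x0=[0.5, 1.0, 2.0, 2.5, 0.9],
               bounds=[(0.05, 3), (0.1, 10), (0.1, 10), (1.0, 10), (0.01, 0.99)],
               method="L-BFGS-B")
s, scale, alpha, u, p = res.x
print(f"estimated threshold u = {u:.3f}, tail index alpha = {alpha:.3f}")
```

Because the data partition changes with u, the likelihood is only piecewise smooth; the real algorithm in the paper would need to handle this more carefully than a single L-BFGS-B run.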
19. Hybrid modeling approach for precise estimation of energy production and consumption based on temperature variations
- Author
-
Wulfran Fendzi Mbasso, Reagan Jean Jacques Molu, Ambe Harrison, Mukesh Pushkarna, Fritz Nguemo Kemdoum, Emmanuel Fendzi Donfack, Pradeep Jangir, Pierre Tiako, and Milkias Berhanu Tuka
- Subjects
Energy modeling, Temperature impact, Hybrid models, Polynomial regression, Sinusoidal functions, Energy consumption, Medicine, Science
- Abstract
This study introduces an advanced mathematical methodology for predicting energy generation and consumption based on temperature variations in regions with diverse climatic conditions and increasing energy demands. Using a comprehensive dataset of monthly energy production, consumption, and temperature readings spanning ten years (2010–2020), we applied polynomial, sinusoidal, and hybrid modeling techniques to capture the non-linear and cyclical relationships between temperature and energy metrics. The hybrid model, which combines sinusoidal and polynomial functions, achieved an accuracy of 79.15% in estimating energy consumption using temperature as a predictor variable. This model effectively captures the seasonal and non-linear consumption patterns, demonstrating a significant improvement over conventional models. In contrast, the polynomial model for energy production, while yielding partial accuracy (R² = 0.65), highlights the need for more advanced techniques to fully capture the temperature-dependent nature of energy production. The results indicate that temperature variations significantly affect energy consumption, with higher temperatures driving increased energy demand for cooling, while lower temperatures affect production efficiency, particularly in systems like hydropower. These findings underscore the necessity for integrating sophisticated models into energy planning to ensure resilience in energy systems amidst climate variability. The study offers critical insights for policymakers to optimize energy generation and distribution in response to changing climatic conditions.
- Published
- 2024
- Full Text
- View/download PDF
20. A General Framework for Generating Three-Components Heavy-Tailed Distributions with Application
- Author
-
Patrick Osatohanmwen, Francis O. Oyegue, Sunday M. Ogbonmwan, and William Muhwava
- Subjects
Extreme value theory, Heavy-tailed distribution, Hybrid models, Maximum likelihood estimation, S&P 500 index, Probabilities. Mathematical statistics, QA273-280
- Abstract
The estimation of a certain threshold beyond which an extreme value distribution can be fitted to the tail of a data distribution remains one of the main issues in the theory of statistics of extremes. While standard Peak over Threshold (PoT) approaches determine this threshold graphically, we introduce in this paper a general framework which makes it possible to determine this threshold algorithmically by estimating it as a free parameter within a composite distribution. To see how this threshold point arises, we propose a general framework for generating three-component hybrid distributions which meets the needs of data sets with a heavy right tail. The approach combines a distribution which can efficiently model the bulk of the data around the mean with a heavy-tailed distribution meant to model the data observations in the tail, using another distribution as a link to connect the two. Some special examples of distributions resulting from the general framework are generated and studied. An estimation algorithm based on the maximum likelihood method is proposed for the estimation of the free parameters of the hybrid distributions. An application of the hybrid distributions to the S&P 500 index financial data set is also carried out.
- Published
- 2024
- Full Text
- View/download PDF
21. Machine learning and deep learning models based grid search cross validation for short-term solar irradiance forecasting
- Author
-
Doaa El-Shahat, Ahmed Tolba, Mohamed Abouhawwash, and Mohamed Abdel-Basset
- Subjects
Solar radiation, Deep learning, Machine learning, Hybrid models, XAI, Computer engineering. Computer hardware, TK7885-7895, Information technology, T58.5-58.64, Electronic computers. Computer science, QA75.5-76.95
- Abstract
In late 2023, the United Nations conference on climate change (COP28), which was held in Dubai, encouraged a quick move from fossil fuels to renewable energy. Solar energy is one of the most promising forms of energy that is both sustainable and renewable. Generally, photovoltaic systems transform solar irradiance into electricity. Unfortunately, instability and intermittency in solar radiation can lead to interruptions in electricity production. The accurate forecasting of solar irradiance guarantees sustainable power production even when solar irradiance is not present, as batteries can store solar energy to be used during periods of solar absence. Additionally, deterministic models take into account the specification of technical PV systems and may not be accurate for low solar irradiance. This paper presents a comparative study of the most common Deep Learning (DL) and Machine Learning (ML) algorithms employed for short-term solar irradiance forecasting. The dataset was gathered in Islamabad during a five-year period, from 2015 to 2019, at hourly intervals with accurate meteorological sensors. Furthermore, Grid Search Cross Validation (GSCV) with five folds is introduced to the ML and DL models for optimizing their hyperparameters. Several performance metrics are used to assess the algorithms, such as the Adjusted R² score, Normalized Root Mean Square Error (NRMSE), Mean Absolute Deviation (MAD), Mean Absolute Error (MAE) and Mean Square Error (MSE). The statistical analysis shows that CNN-LSTM outperforms its counterparts of nine well-known DL models with an Adjusted R² score of 0.984. For ML algorithms, gradient boosting regression is an effective forecasting method with an Adjusted R² score of 0.962, beating its rivals of six ML models. Furthermore, SHAP and LIME are examples of explainable Artificial Intelligence (XAI) utilized for understanding the reasons behind the obtained results.
- Published
- 2024
- Full Text
- View/download PDF
22. Physics-informed transfer learning model for fatigue life prediction of IN718 alloy
- Author
-
Baihan Chen, Jianfeng Zhang, Shangcheng Zhou, Guangping Zhang, and Fang Xu
- Subjects
Fatigue life prediction, Transfer learning, Physical information, Hybrid models, Mining engineering. Metallurgy, TN1-997
- Abstract
To address the challenges posed by inadequate data and data utilization in multiple scenarios of fatigue loading, a Physics-informed Transfer Learning (PITL) model has been developed to predict the fatigue life of IN718 superalloy. Strain-controlled low-cycle fatigue tests were carried out at 400 °C with three distinct strain ratios, which were subsequently segmented for individual transfer learning tests. PITL models with significant engineering value were built by integrating transfer learning methodologies rooted in TrAdaBoost with a physics-based model that hinges on the principles of equivalent strain theory. The findings suggest that PITL models exhibit improved accuracy and greater robustness compared to both transfer learning and physics models.
- Published
- 2024
- Full Text
- View/download PDF
23. AMBER: A Modular Model for Tumor Growth, Vasculature and Radiation Response.
- Author
-
Kunz, Louis V., Bosque, Jesús J., Nikmaneshi, Mohammad, Chamseddine, Ibrahim, Munn, Lance L., Schuemann, Jan, Paganetti, Harald, and Bertolet, Alejandro
- Abstract
Computational models of tumor growth are valuable for simulating the dynamics of cancer progression and treatment responses. In particular, agent-based models (ABMs) tracking individual agents and their interactions are useful for their flexibility and ability to model complex behaviors. However, ABMs have often been confined to small domains or, when scaled up, have neglected crucial aspects like vasculature. Additionally, the integration into tumor ABMs of precise radiation dose calculations using gold-standard Monte Carlo (MC) methods, crucial in contemporary radiotherapy, has been lacking. Here, we introduce AMBER, an Agent-based fraMework for radioBiological Effects in Radiotherapy that computationally models tumor growth and radiation responses. AMBER is based on a voxelized geometry, enabling realistic simulations at relevant pre-clinical scales by tracking temporally discrete states stepwise. Its hybrid approach, combining traditional ABM techniques with continuous spatiotemporal fields of key microenvironmental factors such as oxygen and vascular endothelial growth factor, facilitates the generation of realistic tortuous vascular trees. Moreover, AMBER is integrated with TOPAS, an MC-based particle transport algorithm that simulates heterogeneous radiation doses. The impact of radiation on tumor dynamics considers the microenvironmental factors that alter radiosensitivity, such as oxygen availability, providing a full coupling between the biological and physical aspects. Our results show that simulations with AMBER yield accurate tumor evolution and radiation treatment outcomes, consistent with established volumetric growth laws and radiobiological understanding. Thus, AMBER emerges as a promising tool for replicating essential features of tumor growth and radiation response, offering a modular design for future expansions to incorporate specific biological traits. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. On the phenomenological modelling of physical phenomena
- Author
-
Jüri Engelbrecht, Kert Tamm, and Tanel Peets
- Subjects
science-driven models, internal variables, phenomenological variables, hybrid models, mathematical modelling, Science
- Abstract
Mathematical modelling of physical phenomena is based on the laws of physics, but for complicated processes, phenomenological models could enhance the descriptive and prescriptive power of the analysis. This paper describes some hybrid models, where in addition to the physics-driven part, some phenomenological variables (based on observations) are added. The internal variables widely used in continuum mechanics for modelling dissipative processes and the phenomenological variables used in modelling neural impulses are described and compared. The appendices describe two models of neural impulses and test problems for two classical cases: the wave equation and the diffusion equation. These test problems demonstrate the usage of phenomenological variables for describing dissipation as well as amplification.
- Published
- 2024
- Full Text
- View/download PDF
25. Inferring current and Last Glacial Maximum distributions are improved by physiology‐relevant climatic variables in cold‐adapted ectotherms.
- Author
-
Guillon, Michaël, Martínez‐Freiría, Fernando, Lucchini, Nahla, Ursenbacher, Sylvain, Surget‐Groba, Yann, Kageyama, Masa, Lagarde, Frédéric, Cubizolle, Hervé, and Lourdais, Olivier
- Subjects
-
LAST Glacial Maximum, PHYLOGEOGRAPHY, COLONIZATION (Ecology), VIVIPAROUS lizard, ECOLOGICAL models, COLD-blooded animals, SOLAR temperature
- Abstract
Aim: Ecological niche-based models (ENM) frequently rely on bioclimatic variables (BioV) to reconstruct biogeographic scenarios for species evolution, ignoring mechanistic relations. We tested whether climatic predictors relevant to species' hydric and thermal physiology better approximate distribution patterns and support the location of Pleistocene refugia derived from phylogeographic studies. Location: The Western Palaearctic. Taxon: Vipera berus and Zootoca vivipara, two cold-adapted species. Methods: We used two sets of variables, that is, physiologically meaningful climatic variables (PMV) and BioV, in a multi-algorithm ENM approach, to compare their ability to predict current and Last Glacial Maximum (LGM) species ranges. We estimated current and LGM permafrost extent to address spatially the cold-hardiness dissimilarity between the two species. Results: PMV more accurately explained the current distribution of these two cold-adapted species and identified the importance of summer temperature and solar radiation, which constrain activity in cold habitats. PMV also provided better insight than BioV predictors into the LGM distribution. Notably, by including the permafrost extent, PMV-based models gave a parsimonious putative arrangement and validity of refugia for each clade and subclade, in accordance with phylogeographic data. Northern refugia were also identified from 48 to 52° N for V. berus and from 50 to 54° N for Z. vivipara. Main Conclusions: Our hybrid approach based on PMV generated more realistic predictions for both current (biogeographical validation) and past distributions (phylogeographic validation). By combining constraints during the activity period (summer climatic niche) and those inherent to the wintering period (freeze tolerance), we managed to identify glacial refugia in agreement with phylogeographic hypotheses concerning post-glacial routes and colonization scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Optimization of Support Vector Machine with Biological Heuristic Algorithms for Estimation of Daily Reference Evapotranspiration Using Limited Meteorological Data in China.
- Author
-
Guo, Hongtao, Wu, Liance, Wang, Xianlong, Xing, Xuguang, Zhang, Jing, Qing, Shunhao, and Zhao, Xinbo
- Subjects
-
METAHEURISTIC algorithms, WATER management, PARTICLE swarm optimization, SUPPORT vector machines, CLIMATIC zones
- Abstract
Precise estimation of daily reference crop evapotranspiration (ET0) is critical for water resource management and agricultural irrigation optimization worldwide. In China, diverse climatic zones pose challenges for accurate ET0 prediction. Here, we evaluate the performance of a support vector machine (SVM) and its hybrid models, PSO-SVM and WOA-SVM, utilizing meteorological data spanning 1960–2020. Our study aims to identify a high-precision, low-input ET0 estimation tool. The findings indicate that the hybrid models, particularly WOA-SVM, demonstrated superior accuracy with R² values ranging from 0.973 to 0.999 and RMSE values between 0.123 and 0.863 mm/d, outperforming the standalone SVM model with R² values of 0.955 to 0.989 and RMSE values of 0.168 to 0.982 mm/d. The standalone SVM model showed relatively lower accuracy with R² values of 0.822 to 0.887 and RMSE values of 0.381 to 1.951 mm/d. Notably, the WOA-SVM model, with R² values of 0.990 to 0.992 and RMSE values of 0.092 to 0.160 mm/d, emerged as the top performer, showcasing the benefits of the whale optimization algorithm in enhancing SVM's predictive capabilities. The PSO-SVM model also presented improved performance, especially in the temperate continental zone (TCZ), subtropical monsoon region (SMZ), and temperate monsoon zone (TMZ), when using limited meteorological data as the input. The study concludes that the WOA-SVM model is a promising tool for high-precision daily ET0 estimation with fewer meteorological parameters across the different climatic zones of China. [ABSTRACT FROM AUTHOR] (See the PSO-SVR sketch after this record.)
- Published
- 2024
- Full Text
- View/download PDF
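A hand-rolled particle swarm optimization over SVR hyperparameters (log10 C, log10 gamma), scored by cross-validated R², in the spirit of entry 26's PSO-SVM; a WOA-SVM variant would swap in the whale-optimization update rule. The swarm settings, search bounds, and placeholder data are assumptions.

```python
# Minimal PSO over SVR hyperparameters, evaluated by 3-fold CV R^2.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(11)
X = rng.normal(size=(300, 4))                      # e.g., Tmax, Tmin, RH, wind
y = 2 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.1, 300)   # stand-in ET0

def fitness(params):
    c, g = 10 ** params                            # search in log10 space
    return cross_val_score(SVR(C=c, gamma=g), X, y, cv=3, scoring="r2").mean()

n_particles, n_iter = 10, 15
pos = rng.uniform([-1, -3], [3, 1], size=(n_particles, 2))   # log10(C), log10(gamma)
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    # Standard velocity update: inertia + cognitive + social terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [-1, -3], [3, 1])
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best log10(C), log10(gamma):", gbest, "CV R2:", pbest_val.max().round(3))
```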
27. End‐Point Prediction of Converter Steelmaking Based on Main Process Data.
- Author
-
Kang, Yi, Zhao, Jun‐xue, Li, Bin, Ren, Meng‐meng, Cao, Geng, Yue, Shen, and An, Bei‐qi
- Abstract
In this article, main process data, notably time–series data such as lance position patterns, are analyzed during converter steelmaking, and methodologies in data processing and transforming are proposed. In this study, utilizing both the transformed key time–series and primary static process data, the influence of various process parameters on the end‐point parameters of converter steelmaking is analyzed. Furthermore, it establishes predictive models for the end‐point content of carbon (C) and phosphorus (P), as well as the end‐point temperature. In the findings, it is indicated that the end‐point carbon content and temperature are primarily influenced by the oxygen flow pattern, lime addition pattern, and key smelting parameters. The end‐point phosphorus content is mainly affected by the oxygen flow pattern, limestone addition pattern, and dolomite addition pattern. Regarding the prediction of end‐point carbon and phosphorus content, and end‐point temperature, compared to seven sub‐models, the hybrid model demonstrates an average accuracy improvement of 37.88%, 25.03%, and 31.51%, respectively, and the end‐point hit rate improves by 18.77%, 19.59%, and 20.41%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Clustering techniques performance comparison for predicting the battery state of charge: A hybrid model approach.
- Author
-
Ordás, María Teresa, Blanco, David Yeregui Marcos del, Aveleira-Mata, José, Zayas-Gato, Francisco, Jove, Esteban, Casteleiro-Roca, José-Luis, Quintián, Héctor, Calvo-Rolle, José Luis, and Alaiz-Moreton, Héctor
- Subjects
VOLTAGE, ELECTRIC currents, HOUSEHOLD electronics, STORAGE batteries, RENEWABLE energy sources
- Abstract
Batteries are a fundamental storage component due to their various applications in mobility, renewable energy, and consumer electronics, among others. Regardless of the battery typology, one key variable from a user's perspective is the remaining energy in the battery. It is usually presented as the percentage of remaining energy relative to the total energy that can be stored and is labeled the State Of Charge (SOC). This work addresses the development of a hybrid model for a Lithium Iron Phosphate (LiFePO4) power cell, chosen for its broad adoption. The proposed model estimates the SOC, taking voltage and electric current as inputs and the SOC as the output. To this end, four models based on k-Means, Agglomerative Clustering, Gaussian Mixture and Spectral Clustering techniques have been tested in order to obtain an optimal solution. [ABSTRACT FROM AUTHOR] (See the clustering sketch after this record.)
- Published
- 2024
- Full Text
- View/download PDF
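One way to read the hybrid model in entry 28: a clustering stage (k-Means here; the paper also tests Agglomerative, Gaussian Mixture, and Spectral Clustering) partitions the voltage-current space, and a simple local regressor per cluster maps measurements to SOC. The synthetic measurements and the linear local models are assumptions.

```python
# Clustering-plus-regression sketch for SOC estimation: k-Means partitions
# (voltage, current) space; one local regressor is fitted per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(12)
n = 2000
voltage = rng.uniform(2.8, 3.6, n)                 # synthetic LiFePO4 readings
current = rng.uniform(-10, 10, n)
soc = np.clip(60 + 80 * (voltage - 3.2) - 0.8 * current + rng.normal(0, 2, n), 0, 100)

X = np.column_stack([voltage, current])
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
local = {k: LinearRegression().fit(X[km.labels_ == k], soc[km.labels_ == k])
         for k in range(4)}

def predict_soc(v, i):
    # Route the measurement to its cluster, then apply the local model.
    x = np.array([[v, i]])
    return local[km.predict(x)[0]].predict(x)[0]

print(f"SOC at 3.3 V, 2 A: {predict_soc(3.3, 2.0):.1f} %")
```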
29. Optimization Techniques in Municipal Solid Waste Management: A Systematic Review.
- Author
-
Alshaikh, Ryan and Abdelfatah, Akmal
- Abstract
As a consequence of human activity, waste generation is unavoidable, and its volume and complexity escalate with urbanization, economic progress, and the elevation of living standards in cities. Annually, the world produces about 2.01 billion tons of municipal solid waste, which often lacks environmentally safe management. The importance of solid waste management lies in its role in sustainable development, aimed at reducing the environmental harms from waste creation and disposal. With the expansion of urban populations, waste management systems grow increasingly complex, necessitating more sophisticated optimization strategies. This analysis thoroughly examines the optimization techniques used in solid waste management, assessing their application, benefits, and limitations by using PRISMA 2020. This study, reviewing the literature from 2010 to 2023, divides these techniques into three key areas: waste collection and transportation, waste treatment and disposal, and resource recovery, using tools like mathematical modeling, simulation, and artificial intelligence. It evaluates these strategies against criteria such as cost-efficiency, environmental footprint, energy usage, and social acceptability. Significant progress has been noted in optimizing waste collection and transportation through innovations in routing, bin placement, and the scheduling of vehicles. The paper also explores advancements in waste treatment and disposal, like selecting landfill sites and converting waste to energy, alongside newer methods for resource recovery, including sorting and recycling materials. In conclusion, this review identifies research gaps and suggests directions for future optimization efforts in solid waste management, emphasizing the need for cross-disciplinary collaboration, leveraging new technologies, and adopting tailored approaches to tackle the intricate challenges of managing waste. These insights offer valuable guidance for policymakers, waste management professionals, and researchers involved in crafting sustainable waste strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Fuzzy Cognitive Maps for Analyzing User Satisfaction in Information Services.
- Author
-
Girija, D. K., N., Yogeesh, and M., Rashmi
- Abstract
In this paper, we develop a Fuzzy Cognitive Map (FCM)-based framework for investigating user satisfaction in information services, focusing on how various influencing factors connect to the overall user service experience. Conventional approaches usually perform poorly under extensive uncertainty or complex relationships among service quality, response time, usability, and personalization. FCMs, by contrast, give us an appropriate mathematical framework to model causal relationships and simulate the dynamic interplay between these factors. In the study, critical factors are taken as nodes in the FCM, and causal weights are established between them so that their importance can be quantified. The state of every concept is then updated via matrix-vector operations, iterating with a sigmoid activation function until convergence. A comprehensive case study illustrates actual usage of the FCM framework, highlights its ability to isolate the key drivers of satisfaction, and suggests avenues for improvement. Next steps include integrating IoT sensors for real-time monitoring, hybrid models with machine learning to improve predictions, and applications in fields ranging from e-commerce to healthcare. This experimentation underscores FCMs' potential in decision-making procedures and offers insight into enhancing the user experience of information services. [ABSTRACT FROM AUTHOR] (See the FCM iteration sketch after this record.)
- Published
- 2024
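The FCM update loop described in entry 30, sketched in numpy: concept activations are iterated as x(t+1) = sigmoid(W x(t)) until convergence. The four concepts and the weight matrix are illustrative, not the paper's values; some FCM variants also add a self-memory term x(t) inside the sigmoid.

```python
# Fuzzy cognitive map iteration: x_{t+1} = sigmoid(W @ x_t) to a fixed point.
import numpy as np

def sigmoid(z, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * z))

# Concepts: service quality, response time, usability, user satisfaction.
W = np.array([[0.0,  0.0, 0.0, 0.0],
              [-0.4, 0.0, 0.0, 0.0],
              [0.3,  0.0, 0.0, 0.0],
              [0.6, -0.5, 0.4, 0.0]])   # row i holds the causes feeding concept i

x = np.array([0.7, 0.4, 0.6, 0.5])     # initial activation levels
for step in range(100):
    x_new = sigmoid(W @ x)
    if np.max(np.abs(x_new - x)) < 1e-5:
        break
    x = x_new

print(f"converged in {step} steps; satisfaction = {x[3]:.3f}")
```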
31. Blood Glucose Prediction from Nutrition Analytics in Type 1 Diabetes: A Review.
- Author
-
Lubasinski, Nicole, Thabit, Hood, Nutter, Paul W., and Harper, Simon
- Abstract
Introduction: Type 1 Diabetes (T1D) affects over 9 million worldwide and necessitates meticulous self-management for blood glucose (BG) control. Utilizing BG prediction technology allows for increased BG control and a reduction in the diabetes burden caused by self-management requirements. This paper reviews BG prediction models in T1D, which include nutritional components. Method: A systematic search, utilizing the PRISMA guidelines, identified articles focusing on BG prediction algorithms for T1D that incorporate nutritional variables. Eligible studies were screened and analyzed for model type, inclusion of additional aspects in the model, prediction horizon, patient population, inputs, and accuracy. Results: The study categorizes 138 blood glucose prediction models into data-driven (54%), physiological (14%), and hybrid (33%) types. Prediction horizons of ≤30 min are used in 36% of models, 31–60 min in 34%, 61–90 min in 11%, 91–120 min in 10%, and >120 min in 9%. Neural networks are the most used data-driven technique (47%), and simple carbohydrate intake is commonly included in models (data-driven: 72%, physiological: 52%, hybrid: 67%). Real or free-living data are predominantly used (83%). Conclusion: The primary goal of blood glucose prediction in T1D is to enable informed decisions and maintain safe BG levels, considering the impact of all nutrients for meal planning and clinical relevance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. LuckyMera: a modular AI framework for building hybrid NetHack agents.
- Author
-
Quarantiello, Luigi, Marzeddu, Simone, Guzzi, Antonio, and Lomonaco, Vincenzo
- Subjects
-
ARTIFICIAL intelligence, REINFORCEMENT learning, COMPUTATIONAL complexity, BLENDED learning, VIDEO games
- Abstract
In the last few decades we have witnessed a significant development in Artificial Intelligence (AI) thanks to the availability of a variety of testbeds, mostly based on simulated environments and video games. Among those, roguelike games offer a very good trade-off in terms of complexity of the environment and computational costs, which makes them perfectly suited to testing AI agents' generalization capabilities. In this work, we present LuckyMera, a flexible, modular, extensible and configurable AI framework built around NetHack, a popular terminal-based, single-player roguelike video game. This library is aimed at simplifying and speeding up the development of AI agents capable of successfully playing the game and at offering a high-level interface for designing game strategies. LuckyMera comes with a set of off-the-shelf symbolic and neural modules (called "skills"): these modules can be either hard-coded behaviors or neural Reinforcement Learning approaches, with the possibility of creating compositional hybrid solutions. Additionally, LuckyMera provides utility features to save its experiences in the form of trajectories for further analysis and to use them as datasets to train neural modules, with a direct interface to the NetHack Learning Environment and MiniHack. Through an empirical evaluation we validate our skill implementations and propose a strong baseline agent that can reach state-of-the-art performance in the complete NetHack game. LuckyMera is open-source and available at . [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
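As an illustration of the skill-based architecture described above, here is a minimal sketch of how symbolic and neural modules might be composed into a hybrid agent. All class and method names here are hypothetical and do not reflect LuckyMera's actual API.

```python
from abc import ABC, abstractmethod

class Skill(ABC):
    """A module that proposes actions; may be hard-coded or neural."""
    @abstractmethod
    def applicable(self, obs) -> bool: ...
    @abstractmethod
    def act(self, obs): ...

class PrayIfWeak(Skill):
    """Hard-coded symbolic behavior: pray when hit points run low."""
    def applicable(self, obs):
        return obs.get("hp", 1.0) < 0.2
    def act(self, obs):
        return "pray"

class NeuralPolicy(Skill):
    """Stand-in wrapper for an RL-trained module."""
    def __init__(self, policy):
        self.policy = policy
    def applicable(self, obs):
        return True
    def act(self, obs):
        return self.policy(obs)

class HybridAgent:
    """Queries skills in priority order; the first applicable skill acts."""
    def __init__(self, skills):
        self.skills = skills
    def step(self, obs):
        for skill in self.skills:
            if skill.applicable(obs):
                return skill.act(obs)
        return "search"  # fallback action

agent = HybridAgent([PrayIfWeak(), NeuralPolicy(lambda obs: "move_east")])
print(agent.step({"hp": 0.1}))  # -> "pray"
```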
33. A GENERALIZED GLOW DISCHARGE MODEL BASED ON A TRIGONOMETRIC BASIS
- Author
-
О. В., АНДРІЄНКО
- Subjects
- *
GLOW discharges , *ORTHONORMAL basis , *MONTE Carlo method , *HYDRODYNAMICS , *ELECTRODE testing - Abstract
Purpose. To review and classify existing glow discharge models according to the mathematical description of the processes, the pressure, the gas type and the electrode geometry. The article aims to create a generalized discharge model that determines the influence of each parameter on the discharge characteristics. Methodology. To achieve the research goals, methods of theoretical analysis of scientific sources were used, together with a mathematical method for describing a generalized discharge model based on an orthonormal basis, in particular a trigonometric basis. Findings. An overview of the glow discharge was conducted and a classification of models according to their underlying analytical methods was proposed. The vector of parameters common to all models was isolated, and the characteristics of the models were compared in tabular form. A generalized model for studying gas discharges through modeling with orthonormal bases was proposed; as an example, the formation of a vector of variable parameters in the trigonometric basis is given (a minimal sketch of such an expansion follows this entry). Originality. The article offers a generalized discharge model that demonstrates how a change in a specific parameter affects the discharge characteristics. The model makes it possible to vary the model parameters simultaneously, accumulate the responses to parameter changes in each test and, after analyzing the results, isolate the effect of each parameter under cumulative parameter changes. Practical value. The model allows the influence of various parameters on the discharge to be analyzed independently of the specific experimental conditions. Representing the parameters in an orthonormal basis makes it possible to identify which parameters most strongly affect the stability and efficiency of the discharge, both individually and in combination with other parameters. This allows these parameters to be optimized for the best results under specific conditions, making the model a universal tool for researchers and engineers that facilitates the analysis of relationships between parameters and provides a more complete picture of the system's behavior. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
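To make the orthonormal-basis idea concrete, the following sketch expands a stand-in parameter profile in an orthonormal trigonometric basis on [0, 2π] and reconstructs it from the coefficients. The profile and truncation order are illustrative assumptions, not the paper's discharge model.

```python
import numpy as np

# Orthonormal trigonometric basis on [0, 2*pi]:
# phi_0 = 1/sqrt(2*pi), phi_{2k-1} = sin(k x)/sqrt(pi), phi_{2k} = cos(k x)/sqrt(pi)
def basis(x, n):
    if n == 0:
        return np.full_like(x, 1.0 / np.sqrt(2 * np.pi))
    k = (n + 1) // 2
    return (np.cos(k * x) if n % 2 == 0 else np.sin(k * x)) / np.sqrt(np.pi)

x = np.linspace(0.0, 2 * np.pi, 4096)
f = np.exp(-((x - np.pi) ** 2))          # stand-in parameter profile
dx = x[1] - x[0]

N = 9                                     # truncation order
coeffs = [np.sum(f * basis(x, n)) * dx for n in range(N)]  # inner products
f_hat = sum(c * basis(x, n) for n, c in enumerate(coeffs))
print("max truncation error:", np.max(np.abs(f - f_hat)))
```

Because the basis is orthonormal, each coefficient is an independent coordinate of the parameter vector, which is what lets the effect of each parameter be isolated after cumulative changes.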
34. On the phenomenological modelling of physical phenomena.
- Author
-
Engelbrecht, Jüri, Tamm, Kert, and Peets, Tanel
- Subjects
- *
HEAT equation , *PHYSICAL laws , *MATHEMATICAL variables , *PHENOMENOLOGICAL theory (Physics) , *CONTINUUM mechanics - Abstract
Mathematical modelling of physical phenomena is based on the laws of physics, but for complicated processes, phenomenological models can enhance the descriptive and prescriptive power of the analysis. This paper describes some hybrid models in which, in addition to the physics-driven part, phenomenological variables (based on observations) are added. The internal variables widely used in continuum mechanics for modelling dissipative processes and the phenomenological variables used in modelling neural impulses are described and compared. The appendices describe two models of neural impulses and test problems for two classical cases: the wave equation and the diffusion equation. These test problems demonstrate the use of phenomenological variables for describing dissipation as well as amplification (a generic toy formulation is sketched after this entry). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
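As a generic illustration of the internal-variable idea (an assumption made here for exposition, not the authors' specific equations), a diffusion equation for a field u can be augmented with a phenomenological variable w that relaxes towards u:

```latex
\begin{aligned}
\frac{\partial u}{\partial t} &= D\,\frac{\partial^{2} u}{\partial x^{2}} - \kappa\, w,\\
\tau\,\frac{\partial w}{\partial t} &= u - w.
\end{aligned}
```

For τ → 0 the internal variable relaxes to w ≈ u, so the coupling reduces to a linear damping term, giving dissipation for κ > 0 and amplification for κ < 0; the same construction can be attached to the wave equation.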
35. Groundwater spring potential mapping: Assessment the contribution of hydrogeological factors.
- Author
-
Zhao, Rui, Fan, Chenchen, Arabameri, Alireza, Santosh, M, Mohammad, Lal, and Mondal, Ismail
- Subjects
- *
HYDROGEOLOGY , *GROUNDWATER , *MACHINE learning , *WATER springs , *RAINFALL , *TOPOGRAPHY - Abstract
Groundwater, a fundamental resource, is not readily accessible in some parts of the world. The present work aimed to obtain precise maps of potential groundwater zones in the Doji Watershed, situated in the eastern part of Golestan province, Iran, using four new advanced hybrid ML models (Dagging-HP, Bagging-HP, AdaBoost-HP, Decorate-HP) and one single model, Hyperpipes (HP). Among the selected models, AdaBoost-HP is the most efficient, with an AUC-ROC of 0.972, accuracy of 0.922, sensitivity of 0.906 and specificity of 0.938. Collinearity was assessed among the 14 training factors, which are, in descending order of significance, LULC, Distance to stream (DtS), Topographic wetness index (TWI), HAND, Distance to road (DtR), Geomorphology, Topographic position index (TPI), Lithology, Drainage density (DD), Elevation, Slope, Rainfall, and Clay (%). Model performance was assessed using the AUC-ROC approach along with accuracy, specificity, and sensitivity (an illustrative boosting-and-evaluation sketch follows this entry). The best model revealed that 7.37% of the study area, in its eastern and south-western parts, has very high groundwater potential, whereas 36.8%, in the north-western and south-eastern parts, has very low groundwater potential. The results of this investigation are therefore more reliable than previous assessments, which encourages further application of this method to groundwater potential mapping in other regions of the world and to other hydrogeological investigations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
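Hyperpipes implementations are uncommon outside Weka, so the following sketch substitutes a decision stump as the AdaBoost base learner and reproduces the paper's evaluation metrics (AUC-ROC, accuracy, sensitivity, specificity) on synthetic stand-ins for the 14 conditioning factors.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix, accuracy_score

# Synthetic stand-in for the 14 conditioning factors (LULC, TWI, slope, ...).
X, y = make_classification(n_samples=500, n_features=14, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                           n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

prob = model.predict_proba(X_te)[:, 1]   # spring-presence probability
pred = model.predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("AUC-ROC:    ", roc_auc_score(y_te, prob))
print("Accuracy:   ", accuracy_score(y_te, pred))
print("Sensitivity:", tp / (tp + fn))
print("Specificity:", tn / (tn + fp))
```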
36. Growing degree‐days do not explain moth species' distributions at broad scales.
- Author
-
Keefe, Hannah E. and Kharouba, Heather M.
- Subjects
SPECIES distribution ,GROWING season ,CLIMATE change ,MOTHS ,PREDICTION models - Abstract
Growing degree-days (GDD), an estimate of an organism's growing season length, has been shown to be an important predictor of Lepidopteran species' distributions and could be influencing Lepidopteran range shifts in response to climate change (a toy GDD calculation follows this entry). Yet one understudied simplification in this literature is that the same thermal threshold is used in the GDD calculations for all species, rather than a species-specific threshold. By characterizing the phenological process influenced by climate, a species-specific estimate of GDD should improve the accuracy of species distribution models (SDMs). To test this hypothesis, we used published, experimentally estimated thermal thresholds and modeled the current geographic distribution of 30 moth species native to North America. We found that the predictive performance of models based on a species-specific estimate of GDD was indistinguishable from that of models based on a standard estimate of GDD, most likely because GDD was not an important predictor of these species' distributions. Our findings suggest that experimentally estimated thermal thresholds may not always scale up to be predictive at broad scales and that more work is needed to leverage data from lab experiments into SDMs to accurately predict species' range shifts in response to climate change. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
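GDD itself is a simple accumulation of heat above a thermal threshold. The toy calculation below uses the daily-average method with made-up temperatures and shows how swapping the standard threshold for a species-specific one changes the estimate.

```python
import numpy as np

def growing_degree_days(t_max, t_min, base):
    """Accumulate degree-days above the thermal threshold `base`
    using the simple daily-average method."""
    t_mean = (np.asarray(t_max, float) + np.asarray(t_min, float)) / 2.0
    return float(np.sum(np.maximum(t_mean - base, 0.0)))

t_max = [18, 22, 25, 14, 20]   # hypothetical daily maxima (deg C)
t_min = [ 7, 10, 12,  5,  9]   # hypothetical daily minima (deg C)
print(growing_degree_days(t_max, t_min, base=10.0))  # standard threshold
print(growing_degree_days(t_max, t_min, base=7.5))   # species-specific threshold
```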
37. Artificial Intelligence for Water Consumption Assessment: State of the Art Review.
- Author
-
Morain, Almando, Ilangovan, Nivedita, Delhom, Christopher, and Anandhi, Aavudai
- Subjects
WATER consumption ,ARTIFICIAL intelligence ,SNOWBALL sampling ,TIME perspective ,RESEARCH personnel ,MACHINE learning ,KNOWLEDGE gap theory - Abstract
In recent decades, demand for freshwater resources has increased the risk of severe water stress. With the growing prevalence of artificial intelligence (AI), many researchers have turned to it as an alternative to linear methods for assessing water consumption (WC). Following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework, this study analyzed 229 screened publications identified through database searches and snowball sampling. The study examines AI's role in water consumption assessment through the lenses of innovation, application sectors, sustainability, and machine learning applications. It also categorizes existing models, such as standalone and hybrid models, by input and output variables and time horizons, classifies learnable parameters and performance indexes, and discusses the advantages, disadvantages, and challenges of AI models. This information is translated into a guide for selecting AI models for WC assessment. As no one-size-fits-all AI model exists, the study suggests hybrid AI models as alternatives: they offer flexibility in efficiency, accuracy, interpretability, adaptability, and data requirements, can address the limitations of individual models, leverage the strengths of different approaches, and provide a better understanding of the relationships between variables. Several knowledge gaps were identified, resulting in suggestions for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Hybrid deep learning models for time series forecasting of solar power.
- Author
-
Salman, Diaa, Direkoglu, Cem, Kusaf, Mehmet, and Fahrioglu, Murat
- Subjects
- *
CONVOLUTIONAL neural networks , *TIME series analysis , *DEEP learning , *SOLAR energy , *FORECASTING , *RENEWABLE energy sources , *TRANSFORMER models - Abstract
Forecasting solar power production accurately is critical for effectively planning and managing renewable energy systems. This paper introduces and investigates novel hybrid deep learning models for solar power forecasting using time series data, analyzing the efficacy of various models at capturing the complex patterns present in solar power data. All possible combinations of convolutional neural network (CNN), long short-term memory (LSTM), and transformer (TF) models are tested, and these hybrids are also compared with single CNN, LSTM and TF models under different optimizers. Three evaluation metrics are employed for the performance analysis. Results show that the CNN–LSTM–TF hybrid model outperforms the other models, with a mean absolute error (MAE) of 0.551% when using the Nadam optimizer. The TF–LSTM model, by contrast, performs relatively poorly, with an MAE of 16.17%, highlighting the difficulty of making reliable solar power predictions. These results provide valuable insights for optimizing and planning renewable energy systems, highlighting the significance of selecting appropriate models and optimizers for accurate solar power forecasting. To our knowledge, this is the first comprehensive study of this kind to include transformer networks in hybrid models for solar power forecasting. (A minimal CNN-LSTM sketch follows this entry.) [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
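A minimal PyTorch sketch of a CNN-to-LSTM hybrid of the kind compared in the paper, trained with the Nadam optimizer and an MAE objective as the abstract reports. The layer sizes and window length are assumptions, and the transformer stage is omitted for brevity.

```python
import torch
import torch.nn as nn

class CNNLSTMForecaster(nn.Module):
    """Conv1d extracts local patterns, an LSTM models the temporal
    sequence, and a linear head emits the next-step forecast."""
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, seq_len, n_features)
        z = self.conv(x.transpose(1, 2))        # -> (batch, 32, seq_len)
        out, _ = self.lstm(z.transpose(1, 2))   # -> (batch, seq_len, hidden)
        return self.head(out[:, -1])            # forecast from the last step

model = CNNLSTMForecaster()
x = torch.randn(8, 48, 1)                      # 8 windows of 48 past readings
y = torch.randn(8, 1)                          # next-step targets (synthetic)
opt = torch.optim.NAdam(model.parameters(), lr=1e-3)  # Nadam, as in the paper
opt.zero_grad()
loss = nn.L1Loss()(model(x), y)                # MAE objective
loss.backward()
opt.step()
```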
39. On Predictive Planning and Counterfactual Learning in Active Inference.
- Author
-
Paul, Aswin, Isomura, Takuya, and Razi, Adeel
- Subjects
- *
ARTIFICIAL intelligence , *COUNTERFACTUALS (Logic) , *DECISION making - Abstract
Given the rapid advancement of artificial intelligence, understanding the foundations of intelligent behaviour is increasingly important. Active inference, regarded as a general theory of behaviour, offers a principled approach to probing the basis of sophistication in planning and decision-making. This paper examines two decision-making schemes in active inference based on "planning" and "learning from experience". Furthermore, we also introduce a mixed model that navigates the data complexity trade-off between these strategies, leveraging the strengths of both to facilitate balanced decision-making. We evaluate our proposed model in a challenging grid-world scenario that requires adaptability from the agent. Additionally, our model provides the opportunity to analyse the evolution of various parameters, offering valuable insights and contributing to an explainable framework for intelligent decision-making. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Hybrid neuro fuzzy inference systems for simulating catchment sediment yield.
- Author
-
Sedighkia, Mahdi, Jahanshahloo, Manizheh, and Datta, Bithin
- Abstract
Increasing sediment yield is one of the important environmental challenges in river basins undergoing land use change. The current study develops an adaptive neuro fuzzy inference system (ANFIS) hybridized with evolutionary algorithms to predict annual sediment yield at the catchment scale from key factors affecting its alteration: the area of the sub-catchments, the average slope of the sub-catchments, rainfall, and a forest index; the output of the model is sediment yield. Several indices, including the Nash–Sutcliffe efficiency (NSE), root mean square error and a vulnerability index (VI), were applied to evaluate model performance (the NSE computation is sketched after this entry), and the hybrid models were also compared in terms of complexity to select the best approach. Based on the results for the Talar River basin in Iran, hybrid models in which particle swarm optimization (PSO), the genetic algorithm, invasive weed optimization, biogeography-based optimization, or shuffled complex evolution trains the neuro fuzzy network are all able to generate reliable sediment yield models. The NSE of all the listed models exceeds 0.8, meaning they are robust for assessing sediment yield resulting from land use change with a focus on deforestation. The proposed models are fairly similar in computational complexity, which implies no priority when selecting among them. However, PSO-ANFIS performed slightly better than the other models, especially in terms of output accuracy, with a high NSE (0.92) and a low VI (1.9 Mg/ha). Using the proposed models is recommended due to their lower time and data requirements compared with physically based models such as the Soil and Water Assessment Tool. However, some drawbacks restrict their application; for example, the proposed models cannot be used at small temporal scales. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
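The NSE used to screen these models is easy to compute: NSE = 1 - Σ(obs - sim)² / Σ(obs - mean(obs))², where 1 is a perfect fit and 0 means the model is no better than the observed mean. A minimal sketch with hypothetical sediment yields:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = [12.1, 8.4, 15.0, 9.7, 11.3]   # hypothetical annual yields (Mg/ha)
sim = [11.5, 9.0, 14.2, 10.1, 11.9]  # hypothetical model output
print(nash_sutcliffe(obs, sim))       # ~0.93, above the paper's 0.8 robustness bar
```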
41. A Review of Time-Series Forecasting Algorithms for Industrial Manufacturing Systems.
- Author
-
Fatima, Syeda Sitara Wishal and Rahimi, Afshin
- Subjects
INDUSTRIALISM ,ARTIFICIAL neural networks ,BOX-Jenkins forecasting ,MANUFACTURING processes ,GENERATIVE adversarial networks - Abstract
Time-series forecasting is crucial to the efficient operation and decision-making processes of various industrial systems. Accurately predicting future trends is essential for optimizing resources, production scheduling, and overall system performance. This comprehensive review examines time-series forecasting models and their applications across diverse industries. We discuss the fundamental principles, strengths, and weaknesses of traditional statistical methods such as Autoregressive Integrated Moving Average (ARIMA) and Exponential Smoothing (ES), which are widely used due to their simplicity and interpretability (a one-line ES recursion is sketched after this entry). However, these models often struggle with the complex, non-linear, and high-dimensional data commonly found in industrial systems. To address these challenges, we explore Machine Learning techniques, including the Support Vector Machine (SVM) and the Artificial Neural Network (ANN), which offer more flexibility and adaptability and often outperform traditional statistical methods. Furthermore, we investigate the potential of hybrid models, which combine the strengths of different methods to achieve more accurate and robust forecasts. Finally, we discuss the potential of newly developed generative models such as the Generative Adversarial Network (GAN) for time-series forecasting. This review emphasizes the importance of carefully selecting the appropriate model based on specific industry requirements, data characteristics, and forecasting objectives. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
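The ES family referenced above reduces, in its simplest form, to the one-line recursion s_t = α·x_t + (1 - α)·s_{t-1}. A minimal sketch with made-up data:

```python
def simple_exponential_smoothing(series, alpha=0.3):
    """s_t = alpha * x_t + (1 - alpha) * s_{t-1};
    the final level serves as the one-step-ahead forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

demand = [100, 104, 99, 110, 108, 115]   # hypothetical production metric
print(simple_exponential_smoothing(demand))
```

The smoothing constant α trades responsiveness against noise suppression, which is exactly the simplicity-versus-flexibility trade-off the review attributes to statistical methods.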
42. From Time-Series to Hybrid Models: Advancements in Short-Term Load Forecasting Embracing Smart Grid Paradigm.
- Author
-
Ali, Salman, Bogarra, Santiago, Riaz, Muhammad Naveed, Phyo, Pyae Pyae, Flynn, David, and Taha, Ahmad
- Subjects
MACHINE learning ,POWER resources management ,HEURISTIC ,PREDICTION models ,POWER resources ,FORECASTING ,SMART meters - Abstract
This review paper is a foundational resource for power distribution and management decisions, thoroughly examining short-term load forecasting (STLF) models within power systems. The study categorizes these models into three groups: statistical approaches, intelligent-computing-based methods, and hybrid models. Performance indicators are compared, revealing the superiority of heuristic search and population-based optimization learning algorithms integrated with artificial neural networks (ANNs) for STLF. However, challenges persist in ANN models, particularly in weight initialization and susceptibility to local minima. The investigation underscores the necessity for sophisticated predictive models to enhance forecasting accuracy, advocating for the efficacy of hybrid models incorporating multiple predictive approaches. Acknowledging the changing landscape, the focus shifts to STLF in smart grids, exploring the transformative potential of advanced power networks. Smart measurement devices and storage systems are pivotal in boosting STLF accuracy, enabling more efficient energy management and resource allocation in evolving smart grid technologies. In summary, this review provides a comprehensive analysis of contemporary predictive models and suggests that ANNs and hybrid models could be the most suitable methods to attain reliable and accurate STLF. However, further research is required, including considerations of network complexity, improved training techniques, convergence rates, and highly correlated inputs to enhance STLF model performance in modern power systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Predicting carbon and oil price returns using hybrid models based on machine and deep learning.
- Author
-
Molina‐Muñoz, Jesús, Mora‐Valencia, Andrés, and Perote, Javier
- Subjects
DEEP learning ,CARBON pricing ,ARTIFICIAL neural networks ,PETROLEUM sales & prices ,MACHINE learning - Abstract
Summary: Predicting carbon and oil prices is gaining relevance in the climate change literature because conventional energy market analysis and the design of mechanisms for climate change mitigation are key inputs for artificial carbon markets. Yet modelling non-linear effects in time series remains a major challenge for carbon and oil price forecasting, which makes hybrid models appealing alternatives for this purpose. This study evaluates the performance of 12 hybrid models that weigh results from random forest, support vector machine, autoregressive integrated moving average and non-linear autoregressive neural network models. The weights are determined by (i) assuming equal weights, (ii) using a neural network to optimise individual weights and (iii) employing deep learning techniques (a toy forecast-combination sketch follows this entry). The findings confirm the salient characteristics of modelling the non-linear effects of time series and the potential of hybrid models based on neural networks and deep learning for predicting carbon and oil price returns. Furthermore, the best results are obtained from hybrid models that combine machine learning and traditional econometric techniques as inputs, capturing both the linear and non-linear effects of time series. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
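A toy sketch of the forecast-combination step: equal weights versus weights fitted by least squares, the latter used here as a simple stand-in for the paper's neural-network weight optimization. The forecast values are fabricated for illustration.

```python
import numpy as np

# Columns: validation-set forecasts from four hypothetical base models
# (e.g., RF, SVM, ARIMA, NAR net); y holds the realized returns.
F = np.array([[ 0.012,  0.010,  0.008,  0.011],
              [-0.004, -0.002, -0.006, -0.003],
              [ 0.007,  0.009,  0.005,  0.008],
              [ 0.001,  0.000,  0.002,  0.001]])
y = np.array([0.011, -0.004, 0.008, 0.001])

equal = F.mean(axis=1)                      # (i) equal weights
w, *_ = np.linalg.lstsq(F, y, rcond=None)   # stand-in for (ii): fitted weights
optimized = F @ w

for name, f in [("equal", equal), ("optimized", optimized)]:
    print(name, "MSE:", np.mean((y - f) ** 2))
```

In practice the weights would be fitted on a held-out window and then applied out of sample, to avoid rewarding models that merely memorize the validation period.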
44. Exploring Multi-Task Learning for Forecasting Energy-Cost Resource Allocation in IoT-Cloud Systems.
- Author
-
Aldossary, Mohammad, Alharbi, Hatem A., and Ayub, Nasir
- Subjects
METAHEURISTIC algorithms ,SERVER farms (Computer network management) ,COMPUTER systems ,RESOURCE allocation ,ENERGY consumption ,SUPPORT vector machines ,DATA warehousing - Abstract
Cloud computing has become increasingly popular due to its capacity to perform computations without relying on physical infrastructure, thereby revolutionizing computer processes. However, the rising energy consumption in cloud centers poses a significant challenge, especially with escalating energy costs. This paper tackles this issue by introducing efficient solutions for data placement and node management, with a clear emphasis on the crucial role of the Internet of Things (IoT) throughout the research process. The IoT assumes a pivotal role in this study by actively collecting real-time data from various sensors strategically positioned in and around data centers. These sensors continuously monitor vital parameters such as energy usage and temperature, thereby providing a comprehensive dataset for analysis. The data generated by the IoT are seamlessly integrated into the Hybrid TCN-GRU-NBeat (NGT) model, enabling a dynamic and accurate representation of the current state of the data center environment. Through the incorporation of the Seagull Optimization Algorithm (SOA), the NGT model optimizes storage migration strategies based on the latest information provided by IoT sensors. The model is trained using 80% of the available dataset and subsequently tested on the remaining 20% (an evaluation sketch along these lines follows this entry). The results demonstrate the effectiveness of the proposed approach, with a Mean Squared Error (MSE) of 5.33% and a Mean Absolute Error (MAE) of 2.83%, accurately estimating power prices and leading to an average reduction of 23.88% in power costs. Furthermore, the integration of IoT data significantly enhances the accuracy of the NGT model, outperforming benchmark algorithms such as DenseNet, Support Vector Machine (SVM), Decision Trees, and AlexNet. The NGT model achieves an accuracy rate of 97.9%, surpassing the rates of 87%, 83%, 80%, and 79%, respectively, for the benchmark algorithms. These findings underscore the effectiveness of the proposed method in optimizing energy efficiency and enhancing the predictive capabilities of cloud computing systems. The IoT plays a critical role in driving these advancements by providing real-time insights into the operational aspects of data centers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
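The evaluation protocol described above (80/20 split, MSE and MAE) is straightforward to reproduce. The sketch below uses a gradient-boosting regressor on synthetic data purely as a stand-in for the NGT model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error
from sklearn.ensemble import GradientBoostingRegressor  # stand-in for NGT

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                       # stand-ins for IoT sensor features
y = X @ rng.normal(size=6) + rng.normal(0, 0.1, 1000)  # synthetic power price

# 80% train / 20% test, matching the paper's protocol.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
pred = model.predict(X_te)
print("MSE:", mean_squared_error(y_te, pred))
print("MAE:", mean_absolute_error(y_te, pred))
```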
45. Comparative Analysis of LSTM, ARIMA, and Hybrid Models for Forecasting Future GDP.
- Author
-
Hamiane, Sana, Ghanou, Youssef, Khalifi, Hamid, and Telmem, Meryam
- Subjects
BOX-Jenkins forecasting ,STANDARD deviations ,ECONOMIC forecasting ,ECONOMIC indicators ,GROSS domestic product ,FORECASTING - Abstract
Gross domestic product (GDP) is an effective indicator of economic development, and GDP forecasts provide a better understanding of future economic trends. This article investigates three methods of forecasting GDP: LSTM, ARIMA and a hybrid approach that combines the two models. The principal aim is to compare the performance of these methods by computing the mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE) and coefficient of determination (R2), and to determine which model provides the most accurate and reliable forecasts. The study used quarterly GDP data from the Federal Reserve Economic Data, covering 75 years from 1947 to 2022. The LSTM model, with weights initialized using the He technique, was trained and tested on the GDP data; the ARIMA model was configured with parameters (1,2,1); and the hybrid (ARIMA-LSTM) model combined the strengths of both approaches (a residual-correction sketch of such a hybrid follows this entry). The results were LSTM (MSE=0.010, RMSE=0.104, MAE=0.077, R2=0.96), ARIMA (MSE=0.095, RMSE=0.309, MAE=0.286, R2=0.75) and hybrid (MSE=0.0018, RMSE=0.043, MAE=0.028, R2=0.99), showing that the hybrid model achieves better prediction accuracy than the individual models in predicting GDP. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
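A minimal sketch of the generic ARIMA-LSTM residual-correction hybrid behind this comparison, using statsmodels and PyTorch on a synthetic trending series. The window length, network size, training schedule, and data are assumptions, not the paper's exact pipeline.

```python
import numpy as np
import torch
import torch.nn as nn
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(np.cumsum(rng.normal(0.5, 1.0, 300)))  # synthetic trending "GDP" series

# Stage 1: ARIMA(1, 2, 1) captures the linear structure.
arima = ARIMA(y, order=(1, 2, 1)).fit()
resid = arima.resid[2:]            # drop initialization transients from d=2

# Stage 2: an LSTM learns to predict the next residual from a window of past ones.
win = 8
X = np.lib.stride_tricks.sliding_window_view(resid, win)[:-1]
t = resid[win:]
X_t = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)   # (n, win, 1)
t_t = torch.tensor(t, dtype=torch.float32).unsqueeze(-1)   # (n, 1)

lstm = nn.LSTM(1, 16, batch_first=True)
head = nn.Linear(16, 1)
opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=1e-2)
for _ in range(200):
    out, _ = lstm(X_t)
    loss = nn.functional.mse_loss(head(out[:, -1]), t_t)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Hybrid forecast = ARIMA forecast + LSTM residual correction.
last_window = torch.tensor(resid[-win:], dtype=torch.float32).view(1, win, 1)
correction = head(lstm(last_window)[0][:, -1]).item()
print(arima.forecast(1)[0] + correction)
```

The division of labor mirrors the abstract: the linear part goes to ARIMA, and the LSTM only has to model what ARIMA leaves behind.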
46. An interpretable hybrid framework combining convolution latent vectors with transformer based attention mechanism for rolling element fault detection and classification
- Author
-
Ali Saeed, M. Usman Akram, Muazzam Khattak, and M. Belal Khan
- Subjects
Intelligent fault diagnosis ,Deep learning ,CNN and transformer ,Fault classification ,Hybrid models ,Science (General) ,Q1-390 ,Social sciences (General) ,H1-99 - Abstract
Failure of industrial assets can cause financial, operational and safety hazards across different industries, so monitoring their condition is crucial for successful and smooth operations. The colossal volume of sensory data generated and acquired throughout industrial operations supports real-time condition monitoring of these assets. Leveraging digital technologies to analyze the acquired data creates an ideal environment for advanced data-driven machine learning techniques, such as convolutional neural networks (CNNs) and vision transformers (ViTs), to detect and classify faults, enabling accurate prediction and timely maintenance of industrial assets. In this paper, we present a novel hybrid framework that combines the local feature extraction ability of CNNs with the transformer's comprehensive understanding of the global context (a minimal sketch follows this entry). The proposed method leverages the weight-sharing properties of CNNs and the ability of transformers to capture spatial relationships in large-scale patterns, making it applicable to datasets of varying sizes. Preprocessing methods such as data augmentation are used to train the model on the Case Western Reserve University (CWRU) dataset in order to increase generalization and computational efficiency. An average fault classification accuracy of 99.62% is achieved over all three fault classes, with an average time-to-fault detection of 38.4 ms. The MFPT fault dataset is used to further validate the method, with an accuracy of 99.17% for the outer race and 99.26% for the inner race. Moreover, the proposed framework can be modified to accommodate alternative convolutional models.
- Published
- 2024
- Full Text
- View/download PDF
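A compact PyTorch sketch of the convolution-plus-transformer pattern the paper describes: a Conv1d turns a raw vibration segment into latent vectors, a transformer encoder attends over them globally, and a linear head classifies. The dimensions and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConvTransformerClassifier(nn.Module):
    """1-D convolutions embed local vibration patterns into latent vectors;
    a transformer encoder attends over them; a linear head classifies."""
    def __init__(self, n_classes=3, d_model=64):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Conv1d(1, d_model, kernel_size=16, stride=8),
            nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                    # x: (batch, signal_len)
        z = self.embed(x.unsqueeze(1))       # -> (batch, d_model, n_patches)
        z = self.encoder(z.transpose(1, 2))  # attend across patches
        return self.head(z.mean(dim=1))      # mean-pool, then classify

model = ConvTransformerClassifier()
logits = model(torch.randn(4, 2048))         # 4 raw vibration segments
print(logits.shape)                          # torch.Size([4, 3])
```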
47. Bioprocess feeding optimization through in silico dynamic experiments and hybrid digital models—a proof of concept
- Author
-
Gianmarco Barberi, Christian Giacopuzzi, and Pierantonio Facco
- Subjects
cell cultures ,hybrid models ,DoDE ,feeding schedule optimization ,artificial neural networks ,Technology ,Chemical technology ,TP1-1185 - Abstract
The development of cell cultures to produce monoclonal antibodies is a multi-step, time-consuming, and labor-intensive procedure which usually lasts several years and requires heavy investment by biopharmaceutical companies. One key aspect of process optimization is improving the feeding strategy. This step is typically performed through design of experiments (DoE) during process development, in such a way as to identify the optimal combinations of factors which maximize the productivity of the cell cultures. However, DoE is not suitable for time-varying factor profiles because it requires a large number of experimental runs, which can last several weeks and cost tens of thousands of dollars. Here we suggest a methodology to optimize the feeding schedule of mammalian cell cultures by virtualizing part of the experimental campaign on a hybrid digital model of the process, in order to accelerate experimentation and reduce the experimental burden. The proposed methodology couples design of dynamic experiments (DoDE) with a hybrid semi-parametric digital model. In particular, DoDE is used to design optimal experiments with time-varying factor profiles, whose experimental data are then utilized to train the hybrid model; the trained model identifies the optimal time profiles of glucose and glutamine for maximizing the antibody titer in the culture despite the limited number of experiments performed on the process. As a proof of concept, the proposed methodology is applied to a simulated process to produce monoclonal antibodies at a 1-L shake flask scale, and the results are compared with an experimental campaign based on DoDE and response surface modeling. The hybrid digital model requires an extremely limited number of experiments (nine) to be accurately trained, making it a promising solution for performing in silico experimental campaigns. The proposed optimization strategy provides a 34.9% increase in antibody titer with respect to the training data and a 2.8% higher antibody titer than the optimal results of two DoDE-based experimental campaigns comprising different numbers of experiments (9 and 31), achieving a high antibody titer (3,222.8 mg/L), very close to the real process optimum (3,228.8 mg/L).
- Published
- 2024
- Full Text
- View/download PDF
48. An improved hybrid model for shoreline change
- Author
-
Naresh Kumar Goud Lakku, Piyali Chowdhury, and Manasa Ranjan Behera
- Subjects
shoreline shift modeling ,hybrid models ,depth of closure ,coastal geomorphology ,wave climate ,Science ,General. Including nature conservation, geographical distribution ,QH1-199.5 - Abstract
Predicting the nearshore sediment transport and shifts in coastlines in view of climate change is important for planning and management of coastal infrastructure and requires an accurate prediction of the regional wave climate as well as an in-depth understanding of the complex morphology surrounding the area of interest. Recently, hybrid shoreline evolution models are being used to inform coastal management. These models typically apply the one-line theory to estimate changes in shoreline morphology based on littoral drift gradients calculated from a 2DH coupled wave, flow, and sediment transport model. As per the one-line theory, the calculated littoral drift is uniformly distributed over the active coastal profile. A key challenge facing the application of hybrid models is that they fail to consider complex morphologies when updating the shorelines for several scenarios. This is mainly due to the scarcity of field datasets on beach behavior and nearshore morphological change that extends up to the local depth of closure, leading to assumptions in this value in overall shoreline shift predictions. In this study, we propose an improved hybrid model for shoreline shift predictions in an open sandy beach system impacted by human interventions and changes in wave climate. Three main conclusions are derived from this study. First, the optimal boundary conditions for modeling shoreline evolution need to vary according to local coastal geomorphology and processes. Second, specifying boundary conditions within physically realistic ranges does not guarantee reliable shoreline evolution predictions. Third, hybrid 2D/one-line models have limited applicability in simple planform morphologies where the active beach profile is subject to direct impacts due to wave action and/or human interventions, plausibly due to the one-line theory assumption of a constant time-averaged coastal profile. These findings provide insightful information into the drivers of shoreline evolution around sandy beaches, which have practical implications for advancing the shoreline evolution models.
- Published
- 2024
- Full Text
- View/download PDF
49. Developing a digital mapping of soil organic carbon on a national scale using Sentinel-2 and hybrid models at varying spatial resolutions
- Author
-
Xiande Ji, Balamuralidhar Purushothaman, R. Venkatesha Prasad, and P.V. Aravind
- Subjects
Soil organic carbon ,Sentinel-2 ,Hybrid models ,Digital soil mapping ,Spatial autocorrelation ,Germany ,Ecology ,QH540-549.5 - Abstract
Mapping the spatial distribution of soil organic carbon (SOC) is crucial for monitoring soil health, understanding ecosystem functions, and contributing to global carbon cycling. However, few studies have directly compared the influence of hybrid and individual models at varying spatial resolutions on SOC prediction at a national scale. In this study, combining remote sensing data with the LUCAS 2018 soil dataset, we evaluated the capacity of hybrid models to predict SOC content at different spatial resolutions in Germany. The hybrid models PLSRK and RFK consist of partial least squares regression (PLSR) and random forest (RF) models, respectively, each combined with ordinary kriging (OK) of the residuals (a minimal regression-kriging sketch follows this entry). Individual PLSR and RF models were used as reference models. All models were applied to estimate SOC content at 10 m, 50 m, 100 m, and 200 m spatial resolutions, with Sentinel-2 bands, band indices, and topographic variables as predictors. The results revealed that the hybrid models predicted SOC content more accurately than the individual models, with higher explained variance and lower prediction errors. The RFK model at a spatial resolution of 100 m fitted best, with R2 = 0.416, RMSE = 0.545, and RPIQ = 1.647, improving the explained variance by 3.74% over the RF model. The results also showed that hybrid models at a relatively coarse resolution (100 m) were more accurate than those at higher spatial resolutions (10 m, 50 m). Sentinel-2 remote sensing data showed significant predictive capability for estimating SOC content. The predicted spatial distribution revealed that high SOC concentrates in the northwest grassland, the central and southwestern mountains, and the Alps in Germany. Our study provides a benchmark SOC map of Germany for monitoring the changes resulting from land use and climate impacts, and illustrates the accuracy of hybrid models and the effects of spatial resolution on SOC prediction at a national scale.
- Published
- 2024
- Full Text
- View/download PDF
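A minimal regression-kriging sketch in the spirit of the RFK model, assuming the pykrige package for ordinary kriging; the coordinates, covariates, and SOC values are synthetic stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from pykrige.ok import OrdinaryKriging   # assumes the pykrige package is installed

rng = np.random.default_rng(1)
n = 300
lon, lat = rng.uniform(6, 15, n), rng.uniform(47, 55, n)  # synthetic German extent
X = rng.normal(size=(n, 5))              # stand-ins for Sentinel-2 bands / terrain
soc = 2 + X[:, 0] - 0.5 * X[:, 1] + 0.3 * np.sin(lon) + rng.normal(0, 0.2, n)

# Stage 1 (the "RF" in RFK): model the SOC trend from the covariates.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, soc)
# In practice, out-of-fold residuals avoid the optimism of in-sample RF fits.
resid = soc - rf.predict(X)

# Stage 2 (the "K"): ordinary kriging of the spatial structure in the residuals.
ok = OrdinaryKriging(lon, lat, resid, variogram_model="spherical")

# Prediction at a new location = RF trend + kriged residual.
lon_new, lat_new = np.array([10.0]), np.array([51.0])
X_new = rng.normal(size=(1, 5))
z, _ = ok.execute("points", lon_new, lat_new)
print(rf.predict(X_new)[0] + z[0])
```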
50. Hybrid time series and ANN-based ELM model on JSE/FTSE closing stock prices
- Author
-
Onalenna Moseane, Johannes Tshepiso Tsoku, and Daniel Metsileng
- Subjects
artificial neural networks ,ARIMA ,extreme learning machine ,hybrid models ,stock price prediction ,Applied mathematics. Quantitative methods ,T57-57.97 ,Probabilities. Mathematical statistics ,QA273-280 - Abstract
Given the numerous factors that can influence stock prices, such as a company's financial health, economic conditions, and the political climate, predicting stock prices can be quite difficult. However, newer learning algorithms such as the extreme learning machine (ELM) make it possible to integrate ARIMA and ANN methods within a hybrid framework. This study examines how hybrid time series models and an artificial neural network (ANN)-based ELM performed when analyzing daily Johannesburg Stock Exchange/Financial Times Stock Exchange (JSE/FTSE) closing stock prices over 5 years, from 15 June 2018 to 15 June 2023, encompassing 1,251 data points. The methods used are the autoregressive integrated moving average (ARIMA), the ANN-based ELM, and a hybrid ARIMA-ANN-based ELM (a minimal ELM sketch follows this entry). ARIMA was used to model the linear component, the ANN-based ELM modeled the nonlinear component, and the hybrid ARIMA-ANN-based ELM model captured both. The models were then compared using error metrics to identify the best model for closing stock prices. The error metrics revealed that the hybrid ARIMA-ANN-based ELM model performed better than the ARIMA [1, 6, 6] and ANN-based ELM models. It is evident from the literature that better forecasting leads to better policies, so this study recommends that policymakers and practitioners use the hybrid model, as it yields better results. Future research could assess model effectiveness using additional conventional linear models and hybrid variants such as ARIMA-generalized autoregressive conditional heteroskedasticity (GARCH) and ARIMA-EGARCH, and could integrate these with non-linear models to better capture both linear and non-linear patterns in the data.
- Published
- 2024
- Full Text
- View/download PDF
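The ELM at the heart of this hybrid is small enough to sketch in full: the hidden-layer weights are random, and only the output weights are solved, in closed form, by least squares. The price series below is synthetic.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Extreme learning machine: random hidden weights, output weights
    solved in closed form via the pseudo-inverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)          # random nonlinear feature map
    beta = np.linalg.pinv(H) @ y    # only the output layer is "trained"
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: predict the next closing price from the previous five.
rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(0, 1, 400)) + 100     # synthetic closing prices
X = np.lib.stride_tricks.sliding_window_view(prices, 5)[:-1]
y = prices[5:]
W, b, beta = elm_fit(X, y)
print(elm_predict(X[-1:], W, b, beta))               # one-step-ahead forecast
```

In the hybrid configuration the same machine would be fitted to ARIMA residuals rather than raw prices, so the closed-form solve replaces iterative backpropagation entirely.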