3,929 results for "Model evaluation"
Search Results
2. Multifractal Analysis for Evaluating the Representation of Clouds in Global Kilometer‐Scale Models.
- Author
-
Freischem, Lilli J., Weiss, Philipp, Christensen, Hannah M., and Stier, Philip
- Subjects
GEOSTATIONARY satellites, ATMOSPHERIC models, FRACTALS, EVALUATION methodology, EXPONENTS
- Abstract
Clouds are one of the largest sources of uncertainty in climate predictions. Global km‐scale models need to simulate clouds and precipitation accurately to predict future climates. To isolate issues in their representation of clouds, models need to be thoroughly evaluated with observations. Here, we introduce multifractal analysis as a method for evaluating km‐scale simulations. We apply it to outgoing longwave radiation fields to investigate structural differences between observed and simulated anvil clouds. We compute fractal parameters which compactly characterize the scaling behavior of clouds and can be compared across simulations and observations. We use this method to evaluate the nextGEMS ICON simulations via comparison with observations from the geostationary satellite GOES‐16. We find that multifractal scaling exponents in the ICON model are significantly lower than in observations. We conclude that too much variability is contained in the small scales (<100 km), leading to less organized convection and smaller, isolated anvils. Plain Language Summary: In this paper, we present a new approach to evaluating state‐of‐the‐art high‐resolution climate models. We use a type of analysis that captures how a field like outgoing radiation varies between two points in space; it is called multifractal analysis. We apply multifractal analysis to snapshots of climate model simulations and satellite observations, and compare the results to evaluate the model. In contrast to traditional evaluation approaches, our method focuses on the evaluation of the spatio‐temporal structure of cloud fields, exploiting previously untapped information content. Hence, it can take into account the fine details in time and space that high‐resolution climate models provide. We use our method to evaluate the ICON atmospheric model.
We find that the simulation does not contain enough large clusters of clouds, as found in big thunderstorms; instead, clouds are randomly distributed in space: the simulated clouds are not organized enough. Key Points: Quantifiable, structural evaluation metrics such as multifractal analysis should be used to evaluate and improve km‐scale models. Multifractal analysis finds that deep convection in the ICON model is not organized enough, leading to smaller fractal parameters. The model's bias toward smaller fractal parameters can be attributed to clouds simulated over the ocean. [ABSTRACT FROM AUTHOR]
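The scaling analysis described in this abstract rests on structure functions, the standard building block of (multi)fractal analysis. Below is a minimal mono-fractal sketch in Python: the q-th order structure function S_q(r) = ⟨|f(x+r) − f(x)|^q⟩ is computed at several lags and the scaling exponent ζ(q) is read off as the log–log slope. The function name, lags, and white-noise field are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def scaling_exponents(field, qs=(1, 2, 3), seps=(1, 2, 4, 8, 16, 32)):
    """Estimate structure-function scaling exponents zeta(q) of a 2D field.

    S_q(r) = <|f(x + r) - f(x)|^q> is computed for horizontal separations r;
    zeta(q) is the slope of log S_q(r) versus log r.
    """
    exps = {}
    for q in qs:
        sq = []
        for r in seps:
            diff = np.abs(field[:, r:] - field[:, :-r])  # increments at lag r
            sq.append((diff ** q).mean())
        slope, _ = np.polyfit(np.log(seps), np.log(sq), 1)
        exps[q] = slope
    return exps

# Sanity check: white noise is uncorrelated between scales, so zeta(q) ~ 0
rng = np.random.default_rng(0)
noise = rng.standard_normal((256, 256))
print(scaling_exponents(noise))
```

A smoother, more organized field concentrates variance at large scales and yields larger exponents, which is the direction of the model–observation difference reported above.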
- Published
- 2024
- Full Text
- View/download PDF
3. Risk Factors for Musculoskeletal Disorders in Korean Farmers: Survey on Occupational Diseases in 2020 and 2022.
- Author
-
Kim, Jinheum, Youn, Kanwoo, and Park, Jinwoo
- Subjects
OCCUPATIONAL disease risk factors, RISK assessment, PEARSON correlation (Statistics), NECK, WRIST, OCCUPATIONAL diseases, ARM, RECEIVER operating characteristic curves, ERGONOMICS, MUSCULOSKELETAL system diseases, MULTIPLE regression analysis, SEX distribution, CHI-squared test, DESCRIPTIVE statistics, AGE distribution, SURVEYS, PESTICIDES, ODDS ratio, ECONOMIC impact, AGRICULTURAL laborers, KNEE, CONFIDENCE intervals, AGRICULTURE, EMPLOYEES' workload, DISEASE risk factors
- Abstract
Background/Objectives: This study investigated factors influencing the prevalence of musculoskeletal disorders (MSDs) resulting from agricultural work, utilizing the 2020 and 2022 occupational disease survey data collected by the Rural Development Administration. The combined data from these years indicated a 6.02% prevalence of MSDs, reflecting a significant class imbalance in the binary response variables. This imbalance could lead to classifiers overlooking rare events, potentially inflating accuracy assessments. Methods: We evaluated five distinct models, comparing their performance on both the original data and on synthetic data generated to mitigate the class imbalance. In the multivariate logistic model, we focused on the main effects of the covariates as there were no statistically significant second-order interactions. Results: Focusing on the random over-sampling examples (ROSE) method, gender, age, and pesticide use were particularly impactful. The odds of experiencing MSDs were 1.29 times higher for females than males. The odds increased with age: 2.66 times higher for those aged 50–59, 4.60 times higher for those aged 60–69, and 7.16 times higher for those aged 70 or older, compared to those under 50. Pesticide use was associated with 1.26 times higher odds of developing MSDs. Among body part usage variables, all except wrists and knees were significant. Farmers who frequently used their necks, arms, and waist showed 1.27, 1.11, and 1.23 times higher odds of developing MSDs, respectively. Conclusions: The accuracy of the raw method was high, but the ROSE method outperformed it for precision and F1 score, and both methods showed similar AUC. [ABSTRACT FROM AUTHOR]
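The workflow described here — rebalance a rare outcome, fit a logistic model, report odds ratios as exp(coefficients) — can be sketched with plain random over-sampling, the simpler cousin of ROSE (ROSE additionally smooths the duplicated minority cases with a kernel). The simulated covariate and the gradient-descent logistic fit below are illustrative assumptions, not the study's data or code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated survey: a rare binary outcome (~5-6% prevalence) driven by one covariate.
n = 5000
x = rng.standard_normal(n)
true_beta = np.log(1.3)                 # true odds ratio of 1.3 per unit of x
logit = -2.9 + true_beta * x
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# Random over-sampling: duplicate minority-class rows until the classes balance.
minority = np.flatnonzero(y)
extra = rng.choice(minority, size=(~y).sum() - y.sum(), replace=True)
xb = np.concatenate([x, x[extra]])
yb = np.concatenate([y, np.ones(extra.size, dtype=bool)])

# Plain gradient-descent logistic regression on the balanced sample.
X = np.column_stack([np.ones_like(xb), xb])
beta = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (yb - p) / len(yb)

print("odds ratio per unit of x:", np.exp(beta[1]))
```

Note that duplicating minority rows changes the intercept (and hence predicted prevalence) but leaves the slope, and therefore the odds ratio, interpretable.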
- Published
- 2024
- Full Text
- View/download PDF
4. Substantial Cold Bias During Wintertime Cold Extremes in the Southern Cascadia Region in Historical CMIP6 Simulations.
- Author
-
Rogers, M. H., Mauger, G., and Cristea, N.
- Subjects
CLIMATE change models, WEATHER, MOUNTAIN climate, COLD (Temperature), SEA level
- Abstract
Global climate models often simulate atmospheric conditions incorrectly due to their coarse grid resolution, flaws in their dynamics, and biases resulting from parameterization schemes. Here we document a bias in the magnitude and extent of minimum temperature extremes in the CMIP6 model ensemble, relative to ERA5. The bias is present in the southern Cascadia region (i.e., Pacific Northwestern United States and southwestern British Columbia, Canada, spanning from the coast to the Rocky Mountains), with some models showing a bias magnitude in excess of −10°C in the first percentile of daily winter minimum temperature. The sea level pressure pattern for these events is similar in CMIP6 models and ERA5, showing high anomalies in the Northeast Pacific that are indicative of an atmospheric blocking pattern and consequently more northerly flow. Though this atmospheric blocking pattern is typically concurrent with cold winter temperatures across much of North America, the Rocky and Cascade mountain ranges prevent the cold air from reaching the southern Cascadia region, as confirmed by observations and reanalysis. Our results suggest that the bias in CMIP6 minimum temperatures is a result of unresolved topography in the Rocky and Cascade mountain ranges, such that the terrain does not adequately block cold air advection into the southern Cascadia region. Plain Language Summary: Global climate models continue to struggle with recreating some of the observed behaviors of our Earth system for a variety of reasons. Here we document one such issue: daily minimum temperatures in southwestern British Columbia and western Washington that are much colder than are observed. We find that these temperatures occur when extremely cold air is moved from the north into southwestern British Columbia and western Washington. In reality, the Rocky and Cascade mountain ranges act as barriers that prevent this air from reaching western Washington and southwestern British Columbia.
However, in the global climate models these mountain ranges are lower and less jagged, making it easier for cold air to move across them. Key Points: CMIP6 models show a pronounced cold bias in the coldest daily minimum temperatures for the southern Cascadia region of North America. We find no evidence suggesting that differences in large‐scale dynamics are causing the cold bias in the southern Cascadia region. Poorly resolved topography in the CMIP6 models allows excessive cold advection into southern Cascadia during atmospheric blocking events. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Evaluating species distribution model predictions through time against paleozoological records.
- Author
-
Lazagabaster, Ignacio A., Thomas, Chris D., Spedding, Juliet V., Ikram, Salima, Solano‐Regadera, Irene, Snape, Steven, and Bro‐Jørgensen, Jakob
- Subjects
SPECIES distribution, CURRENT distribution, DATA recorders & recording, HOLOCENE Epoch, CLIMATE change
- Abstract
Species distribution models (SDMs) are widely used to project how species distributions may vary over time, particularly in response to climate change. Although the fit of such models to current distributions is routinely quantified, SDMs are rarely tested across longer time spans to gauge their actual performance under environmental change. Here, we utilise paleozoological presence/absence records to independently assess the predictive accuracy of SDMs through time. To illustrate the approach, we focused on modelling the Holocene distribution of the hartebeest, Alcelaphus buselaphus, a widespread savannah‐adapted African antelope. We applied various modelling algorithms to three occurrence datasets, including a point dataset from online repositories and two range maps representing current and 'natural' (i.e. hypothetical, assuming no human impact) distributions. We compared conventional model evaluation metrics which assess fit to current distributions (i.e. True Skill Statistic, TSSc, and Area Under the Curve, AUCc) with analogous 'paleometrics' for past distributions (i.e. TSSp, AUCp, and in addition Boycep, F2‐scorep and Sorensenp). Our findings reveal only a weak correlation between the rankings of conventional metrics and paleometrics, suggesting that the models most effectively capturing present‐day distributions may not be the most reliable for hindcasting historical distributions, and that the choice of input data and modelling algorithm both significantly influence environmental suitability predictions and SDM performance. We thus advocate assessment of model performance using paleometrics, particularly those capturing the correct prediction of presences, such as F2‐scorep or Sorensenp, due to the potential unreliability of absence data in paleozoological records.
By integrating archaeological and paleontological records into the assessment of alternative models' ability to project shifts in species distributions over time, we are likely to enhance our understanding of environmental constraints on species distributions. [ABSTRACT FROM AUTHOR]
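The metrics compared in this abstract all derive from a presence/absence confusion matrix. A minimal sketch using their standard definitions — TSS = sensitivity + specificity − 1, Sørensen = 2TP/(2TP + FP + FN), and the F-beta score with beta = 2, which weights recall (correctly predicted presences) over precision — function and argument names are illustrative:

```python
def paleometrics(tp, fp, fn, tn):
    """Confusion-matrix metrics of the kind used to score SDM hindcasts."""
    sens = tp / (tp + fn)          # true positive rate: presences recovered
    spec = tn / (tn + fp)          # true negative rate: absences recovered
    prec = tp / (tp + fp)
    beta2 = 4                      # beta = 2, squared
    return {
        "TSS": sens + spec - 1,
        "Sorensen": 2 * tp / (2 * tp + fp + fn),
        "F2": (1 + beta2) * prec * sens / (beta2 * prec + sens),
    }

# Example: a hindcast recovering 40 of 50 fossil presences and 80 of 100 absences
m = paleometrics(tp=40, fp=20, fn=10, tn=80)
```

Because F2 and Sørensen ignore true negatives entirely, they are insensitive to the unreliable absence records that motivate the authors' preference for presence-weighted metrics.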
- Published
- 2024
- Full Text
- View/download PDF
6. Preparation of Co–Ce@RM catalysts for catalytic ozonation of tetracycline.
- Author
-
Sun, Wenquan, Xie, Yiming, Zhang, Ming, Zhou, Jun, and Sun, Yongjun
- Subjects
CHEMICAL oxygen demand, SCISSION (Chemistry), CATALYTIC activity, DOUBLE bonds, DOPING agents (Chemistry), REACTIVE oxygen species
- Abstract
In this work, a Co–Ce@RM ozone catalyst was developed using red mud (RM), a by‐product of alumina production, as a support material, and its preparation process, catalytic efficiency, and tetracycline (TCN) degradation mechanism were investigated. A comprehensive assessment was carried out using the 3E (environmental, economic, and energy) model. The optimal production conditions for Co–Ce@RM were a Co:Ce doping ratio of 1:3, a calcination temperature of 400°C, and a calcination time of 5 h, achieving a maximum TCN removal rate of 87.91%. The catalyst was characterized using different analytical techniques. At an ozone aeration rate of 0.4 L/min, 9% catalyst loading, and solution pH 9, the optimal removal rates of TCN and chemical oxygen demand by Co–Ce@RM catalytic ozonation were 94.17% and 75.27%, respectively. Moreover, free radical quenching experiments showed that superoxide radicals (O2−) and singlet oxygen (1O2) were the main active species responsible for the degradation of TCN. From the water quality characterization, TCN is inferred to undergo degradation pathways such as demethylation, dehydroxylation, double bond cleavage, and ring‐opening reactions under the influence of the various active substances. Finally, the 3E evaluation model was deployed to evaluate the Co–Ce@RM catalytic ozonation of TCN wastewater. Practitioner Points: The preparation of Co–Ce@RM provides new ideas for resource utilization of red mud. Catalytic ozonation by Co–Ce@RM can produce 1O2 active oxygen groups. The Co–Ce@RM catalyst maintains high catalytic activity after 20 cycles. The degradation pathway of the catalytic ozonation of tetracycline was fully analyzed. Catalytic ozone oxidation processes were evaluated by the "3E" (environmental, economic, and energy) model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Ensemble‐based monthly to seasonal precipitation forecasting for Iran using a regional weather model.
- Author
-
Najafi, Mohammad Saeed and Kuchak, Vahid Shokri
- Subjects
WATER management, PRECIPITATION forecasting, METEOROLOGICAL research, WEATHER forecasting, LEAD time (Supply chain management)
- Abstract
Monthly and seasonal precipitation forecasts can potentially assist disaster risk reduction and water resource management. The aim of this study is to assess the skill of an ensemble framework for monthly and seasonal precipitation forecasts over Iran, focusing on system design and model performance evaluation. The ensemble framework presented in this paper is based on a one‐way double‐nested model that uses the Weather Research and Forecasting (WRF) modelling system to downscale the second version of the NCEP Climate Forecast System (CFSv2). Performance is evaluated for the October–April period at 1‐, 2‐ and 3‐month lead times. Multiple initial conditions, model parameters and physics schemes are used to construct the ensemble members, and the model outputs are bias corrected using the quantile mapping (QM) method. This methodology is applied for two periods: (i) the 2000–2019 climatology, to evaluate the model's ability to forecast precipitation on monthly and seasonal time scales; and (ii) the 2020 forecast, to evaluate the model's performance operationally. The model evaluation uses continuous (e.g., RMSE, r, MBE, NSE) and categorical (e.g., POD, FAR, PC, Heidke skill score) assessment metrics. We conclude that the model outputs were improved by the QM bias correction. According to the results, the proposed ensemble framework can predict monthly and seasonal precipitation amounts in Iran with an accuracy of 58% to 45% for lead‐1 to lead‐3. Averaged over all three lead times, the NSE, CC, MBE, and RMSE were 0.4, 0.56, −15.5, and 41.6, respectively, indicating that the framework has reasonable performance. Our results suggest that precipitation forecast accuracy decreases with lead time, so the accuracy for lead‐1 is higher than for lead‐2 and lead‐3. Additionally, the model's accuracy differs across regions of the country and decreases in the spring.
Using the approach for an operational case, it was found that the spatial features of precipitation predicted by the framework were close to those observed. [ABSTRACT FROM AUTHOR]
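Quantile mapping of the kind used for bias correction in this study can be sketched as empirical CDF matching: each forecast value is assigned its quantile within the model's own climatology, and the observed value at that same quantile is substituted. The gamma-distributed "climatologies" below are synthetic stand-ins, not the study's data.

```python
import numpy as np

def quantile_map(forecast, model_clim, obs_clim):
    """Empirical quantile mapping: replace each forecast value with the
    observed value occupying the same quantile in the training climatology."""
    # Quantile of each forecast value within the model's own climatology
    q = np.searchsorted(np.sort(model_clim), forecast) / len(model_clim)
    # Read the same quantile off the observed climatology
    return np.quantile(obs_clim, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 30.0, 5000)           # "observed" precipitation climatology
model = rng.gamma(2.0, 30.0, 5000) * 0.7   # model with a ~30% dry bias
corrected = quantile_map(model, model, obs)
print(model.mean(), corrected.mean(), obs.mean())
```

After mapping, the corrected sample reproduces the observed distribution (mean, variance, extremes), which is why QM improves both continuous and categorical verification scores.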
- Published
- 2024
- Full Text
- View/download PDF
8. A Machine Learning Approach to Reduce Latency in Edge Computing for IoT Devices.
- Author
-
Ali, Muddassar, Khan, Hamayun, Afzal Rana, Muhammad Tausif, Ali, Arshad, Baig, Muhammad Zeeshan, Rehman, Saif ur, and Alsaawy, Yazed
- Abstract
Nowadays, high latency in Edge Computing (EC) for Internet of Things (IoT) devices due to network congestion and online traffic reduces the achievable precision, performance, and processing power of the network. Data overload in IoT significantly impacts the real-time capabilities of user experience, decision-making efficiency, operational costs, and security in EC. By combining EC innovation with three Machine Learning (ML) models, namely Decision Trees (DT), Support Vector Machines (SVMs), and Convolutional Neural Networks (CNNs), this research aims to tackle the latency experienced by IoT devices and to clean the collected data of errors. Its purpose is to preserve data integrity and to highlight the efficacy of each model's execution by building on the essential components of previous approaches. The proposed model is evaluated for precision, performance, and quality enhancement by measuring the Mean Square Error (MSE), coefficient of determination (R²), and accuracy. [ABSTRACT FROM AUTHOR]
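The evaluation quantities named in this abstract, MSE and R², have standard definitions that can be written down directly (the data values below are illustrative, not the study's measurements):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Square Error: the average squared residual."""
    return np.mean((y_true - y_pred) ** 2)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - residual sum of squares / total variance."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

y = np.array([2.0, 4.0, 6.0, 8.0])
yhat = np.array([2.5, 3.5, 6.5, 7.5])
print(mse(y, yhat), r2(y, yhat))
```

R² = 1 means perfect prediction, R² = 0 means no better than predicting the mean, so the two metrics complement each other: MSE is scale-dependent while R² is normalized.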
- Published
- 2024
- Full Text
- View/download PDF
9. Novel approaches in geomechanical parameter estimation using machine learning methods and conventional well logs.
- Author
-
Mollaei, Farhad, Moradzadeh, Ali, and Mohebian, Reza
- Subjects
POISSON'S ratio, YOUNG'S modulus, SHEAR waves, PARAMETER estimation, DRILL core analysis, DEEP learning, MACHINE learning
- Abstract
Today, geomechanics plays a crucial role in the oil industry, particularly in enhancing production and ensuring well stability. To achieve optimal results, accurate estimation of geomechanical parameters is essential, and intelligent methods offer a low-cost, accurate way to obtain it. The aim of this study is to introduce a new machine learning algorithm to estimate geomechanical parameters from conventional well logs in one of the hydrocarbon field wells in southwest Iran. The shear wave velocity (Vs) and uniaxial compressive strength (UCS) were estimated using machine learning algorithms, and other geomechanical parameters were then calculated from these estimates. For Vs prediction with the MLP and CLM (CNN+LSTM+MLP) algorithms, effective features were first selected using an auto-encoder deep learning algorithm; the selected inputs were Vp, RHOB, CALIPER, and NPHI. The models were assessed using MAE, MAPE, MSE, RMSE, NRMSE, and R² on the train, test, and blind datasets. The CLM algorithm consistently demonstrated superior performance across all datasets; for blind data, R²_MLP = 0.8727 and R²_CLM = 0.9274. These outputs are crucial for the subsequent estimations. Next, dynamic Young's modulus and Poisson's ratio were calculated, and the dynamic brittleness index was computed from them. Subsequently, UCS values were predicted using machine learning algorithms. Since only 12 laboratory core samples of UCS were available, UCS was initially calculated using relevant empirical relations and data from the available well logs, extrapolating the core results to cover the entire target depth range from 3551.072 to 3799.789 m. A relationship between log-derived and laboratory UCS was then established at the depths where the laboratory samples were taken. An auto-encoder deep network was again utilized to select effective features for predicting UCS; the selected inputs were Vp, RHOB, and CALIPER. UCS was then estimated using the MLP and CLM algorithms and assessed with the same metrics on the train, test, and blind datasets; for blind UCS data, R²_MLP = 0.9305 and R²_CLM = 0.9953. The results demonstrate that CLM achieves high accuracy in estimating these parameters, with the deep learning algorithm showing higher determination coefficients and lower errors than MLP.
Furthermore, UCS and tensile strength were calculated, followed by the computation of the static brittleness index, and the relationship between the dynamic and static brittleness indices was investigated. Overall, the findings indicate that machine learning algorithms are robust and accurate methods for estimating Vs, UCS, and other geomechanical parameters from conventional logs. [ABSTRACT FROM AUTHOR]
- Published
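The dynamic elastic parameters mentioned in this abstract follow from standard elasticity relations linking Vp, Vs, and bulk density. A short sketch (the input values are typical illustrative numbers, not the study's well data):

```python
def dynamic_elastic_moduli(vp, vs, rho):
    """Dynamic elastic moduli from sonic velocities and bulk density.

    Standard elasticity relations: Poisson's ratio
      nu = (Vp^2 - 2 Vs^2) / (2 (Vp^2 - Vs^2))
    and dynamic Young's modulus
      E = rho * Vs^2 * (3 Vp^2 - 4 Vs^2) / (Vp^2 - Vs^2).
    vp, vs in m/s; rho in kg/m^3; E is returned in GPa.
    """
    vp2, vs2 = vp * vp, vs * vs
    nu = (vp2 - 2 * vs2) / (2 * (vp2 - vs2))
    e_gpa = rho * vs2 * (3 * vp2 - 4 * vs2) / (vp2 - vs2) / 1e9
    return nu, e_gpa

# Illustrative reservoir rock: Vp = 4500 m/s, Vs = 2500 m/s, rho = 2650 kg/m^3
nu, e = dynamic_elastic_moduli(4500.0, 2500.0, 2650.0)
```

This is why an accurate Vs estimate is the linchpin of the workflow: every downstream dynamic parameter (Young's modulus, Poisson's ratio, brittleness index) inherits its error.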
- 2024
- Full Text
- View/download PDF
10. Sectoral Size‐Resolved Particle Number Emissions With Speciation: Emission Profile‐Based Quantification and a Case Study in the Yangtze River Delta Region, China.
- Author
-
An, Jingyu, Lu, Yiqun, Huang, Dan Dan, Ding, Xiang, Hu, Qingyao, Yan, Rusha, Qiao, Liping, Zhou, Min, Huang, Cheng, Wang, Hongli, Fu, Qingyan, Yu, Fangqun, and Wang, Lin
- Subjects
PARTICLE size distribution, EMISSION inventories, EVIDENCE gaps, CARBON-black, POWER plants
- Abstract
Sizes and number concentrations are critical parameters for the impact of atmospheric particles on climate and human health. However, comprehensive studies focusing on size‐resolved particle number (PN) emissions from various sectors are scarce. This study aims to fill this research gap by developing sectoral size‐resolved PN emissions for major species including sulfate, organic mass (OM), and black carbon (BC). The size‐resolved emission profiles derived from various measurements in the literature were integrated with a particle mass emission inventory (EI) for 13 major sectors in the Yangtze River Delta region of China as a case study. The particle number size distribution (PNSD) of emitted particles exhibited two distinct peaks: one at approximately 10 nm and the other in the range of 40–60 nm. The primary contributors to PN emissions in the region in 2017 were power plants, gasoline vehicles, diesel vehicles, and cooking sources. In terms of species, OM dominated PN emissions, followed by primary sulfate and then BC. A regional size‐resolved aerosol model employing the size‐resolved PN EI developed here (referred to as the BIN‐SPE experiment) provided reasonably accurate temporal variations of the total PN concentration and captured the PNSD within the size range of 10–300 nm. Uncertainty analysis of sectoral PN emissions across size ranges was carried out and the performance of the BIN‐SPE experiment was compared with those of three commonly used PN emission parameterizations. Our model evaluations highlight future needs for in‐depth investigations into more advanced size‐resolved emissions and secondary OM formation mechanisms. Plain Language Summary: Various sources such as power plants, vehicles, cooking, and others emit numerous small particles. Quantifying their sizes and number emissions is crucial for understanding atmospheric particle number size distribution and its impacts on climate and health. 
However, few studies comprehensively address sectoral size‐resolved particle number emissions, with most focusing on global scales or a single source. This study couples size‐resolved emission profiles reported in the literature for 13 major sectors with mass emission inventories to quantify sectoral size‐resolved particle number emissions with speciation for the Yangtze River Delta region in China. The particle number emissions exhibit a bimodal distribution with peaks at approximately 10 and 40–60 nm. Primary contributors are power plants, gasoline vehicles, diesel vehicles, and cooking sources. In terms of species, organic matter dominates particle number emissions, followed by primary sulfate and black carbon. Notably, the peak in emissions with sizes less than 20 nm reconciles the disparity between modeled and observed distributions. To the best of our knowledge, this study represents the first comprehensive examination of sectoral particle number emissions with speciation in China. The method and data sets utilized in this study can be employed to quantify particle number emissions in other regions and are instrumental for impact assessments. Key Points: An emission profile‐based method is applied to quantify sectoral size‐resolved particle number emissions with speciation. Vehicles, power plants, and cooking dominate bimodal particle number emissions, with organics and primary sulfate as major species. Prominent particle number emissions with sizes less than 20 nm substantially reduce model‐observation disparities. [ABSTRACT FROM AUTHOR]
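A bimodal particle number size distribution with peaks near 10 nm and 40–60 nm, as described above, is conventionally represented as a sum of lognormal modes in dN/dlnD form. The mode parameters below are illustrative assumptions, not the study's fitted emission profiles.

```python
import numpy as np

def lognormal_mode(d, n_total, d_median, sigma_g):
    """Number size distribution dN/dlnD of a single lognormal aerosol mode."""
    return (n_total / (np.sqrt(2 * np.pi) * np.log(sigma_g))
            * np.exp(-np.log(d / d_median) ** 2 / (2 * np.log(sigma_g) ** 2)))

d = np.logspace(0, 3, 400)   # diameters, 1-1000 nm
# Illustrative bimodal emission spectrum: a nucleation-like mode near 10 nm
# and an Aitken-like mode near 50 nm (parameters are assumptions)
dist = lognormal_mode(d, 6e3, 10.0, 1.6) + lognormal_mode(d, 4e3, 50.0, 1.8)
peak_d = d[np.argmax(dist)]
# Total number: integrate dN/dlnD over lnD (simple Riemann sum)
total = (dist[:-1] * np.diff(np.log(d))).sum()
```

Integrating dN/dlnD over lnD recovers the total particle number, which is how mode-resolved profiles are coupled to a mass emission inventory.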
- Published
- 2024
- Full Text
- View/download PDF
11. Assessment of Numerical Forecasts for Hub-Height Wind Resource Parameters during an Episode of Significant Wind Speed Fluctuations.
- Author
-
Mo, Jingyue, Shen, Yanbo, Yuan, Bin, Li, Muyuan, Ding, Chenchen, Jia, Beixi, Ye, Dong, and Wang, Dan
- Subjects
WIND speed, BOUNDARY layer (Aerodynamics), WIND forecasting, WIND power, WIND power plants
- Abstract
This study conducts a comprehensive evaluation of four scenario experiments using the CMA_WSP, WRF, and WRF_FITCH models to enhance forecasts of hub-height wind speeds at multiple wind farms in Northern China, particularly under significant wind speed fluctuations during high wind conditions. The experiments apply various wind speed calculation methods, including the Monin–Obukhov similarity theory (ST) and wind farm parameterization (WFP), within a 9 km resolution framework. Data from four geographically distinct stations were analyzed to assess their forecast accuracy over a 72 h period, focusing on transitional wind events characterized by substantial fluctuations. The CMA_WSP model with the ST method (CMOST) achieved the highest scores across the evaluation metrics. Meanwhile, the WRF_FITCH model with the WFP method (FETA) demonstrated superior performance to the other WRF models, achieving the lowest RMSE and a greater stability. Nevertheless, all models encountered difficulties in predicting the exact timing of extreme wind events. This study also explores the effects of these methods on the wind power density (WPD) distribution, emphasizing the boundary layer's influence at the hub-height of 85 m. This influence leads to significant variations in the central and coastal regions. In contrast to the other methods, which account for the comprehensive effects of the entire boundary layer, the ST method primarily relies on the near-surface 10 m wind speed to calculate the hub-height wind speed. These findings provide important insights for enhancing wind speed and WPD forecasts under transitional weather conditions. [ABSTRACT FROM AUTHOR]
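The wind power density evaluated above has the standard definition WPD = ½ρ⟨v³⟩. A short sketch (the air density and speed series are illustrative):

```python
import numpy as np

def wind_power_density(speeds, rho=1.225):
    """Mean wind power density (W/m^2): WPD = 0.5 * rho * <v^3>.

    Averaging v^3 (rather than cubing the mean speed) preserves the
    contribution of gusts, which dominate the available power.
    rho = 1.225 kg/m^3 is the standard sea-level air density.
    """
    v = np.asarray(speeds, dtype=float)
    return 0.5 * rho * np.mean(v ** 3)

# A steady 8 m/s wind versus a fluctuating wind with the same mean speed
steady = wind_power_density([8.0, 8.0, 8.0, 8.0])
gusty = wind_power_density([4.0, 12.0, 4.0, 12.0])
```

The cubic dependence is why errors in forecast hub-height wind speed, and especially in the timing of fluctuations, are amplified threefold (in relative terms) in WPD.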
- Published
- 2024
- Full Text
- View/download PDF
12. Evaluation of tropical cyclone genesis frequency in FGOALS-g3 large ensemble: mean state and interannual variability.
- Author
-
Zhang, Tingyu, Zhou, Tianjun, Huang, Xin, Zhang, Wenxia, Chen, Xiaolong, Lin, Pengfei, and Li, Lijuan
- Subjects
VERTICAL wind shear, TROPICAL cyclones, OCEAN temperature, ATMOSPHERIC models, CYCLOGENESIS, OSCILLATIONS
- Abstract
The tropical cyclone genesis frequency (TCGF) is an essential metric for gauging the performance of climate models. Previous evaluations of CMIP-family models usually employ one realization per model and show diverse performance. Single-model initial-condition large ensemble experiments provide a unique opportunity to quantify how internal variability may affect model evaluation. Here, taking the TCGF in the Western North Pacific (WNP) as an example, we use two genesis potential indices as proxies to evaluate the performance of the FGOALS-g3 large ensemble simulation with 110 members. We show that while internal variability does not have a significant influence on the TCGF mean state evaluation, the TCGF–ENSO (El Niño–Southern Oscillation) relationship is significantly modulated by decadal-scale internal variability. For the mean state, the FGOALS-g3 large ensembles reasonably simulate the TCGF spatial pattern but differ from ERA5 in magnitude. Physical process analysis indicates that, compared with ERA5, nearly all dynamic terms are more unfavorable for tropical cyclogenesis due to cold sea surface temperature anomalies in the midlatitudes, while the thermodynamic terms are conducive to more TCs. For interannual variability, the ENSO–TCGF connection is significantly modulated by the tropical Pacific decadal variability (TPDV) mode through its influence on vertical wind shear in the WNP. In particular, the simulation skill depends on the choice of genesis potential index. Our findings highlight the importance of considering decadal-scale internal variability in the evaluation of interannual ENSO–TCGF variability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Elevation-dependent biases of raw and bias-adjusted EURO-CORDEX regional climate models in the European Alps.
- Author
-
Matiu, Michael, Napoli, Anna, Kotlarski, Sven, Zardi, Dino, Bellin, Alberto, and Majone, Bruno
- Subjects
ATMOSPHERIC models, ALPINE regions, CLIMATE change, ALTITUDES, SEASONS
- Abstract
Data from the EURO-CORDEX ensemble of regional climate model simulations and the CORDEX-Adjust dataset were evaluated over the European Alps using multiple gridded observational datasets. Biases, which are here defined as the difference between models and observations, were assessed as a function of the elevation for different climate indices that span average and extreme conditions. Moreover, we assessed the impact of different observational datasets on the evaluation, including E-OBS, APGD, and high-resolution national datasets. Furthermore, we assessed the bi-variate dependency of temperature and precipitation biases, their temporal evolution, and the impact of different bias adjustment methods and bias adjustment reference datasets. Biases in seasonal temperature, seasonal precipitation, and wet-day frequency were found to increase with elevation. Differences in temporal trends between RCMs and observations caused a temporal dependency of biases, which could be removed by detrending both observations and RCMs. The choice of the reference observation datasets used for bias adjustment turned out to be more relevant than the choice of the bias adjustment method itself. Consequently, climate change assessments in mountain regions need to pay particular attention to the choice of observational dataset and, furthermore, to the elevation dependence of biases and the increasing observational uncertainty with elevation in order to provide robust information on future climate. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. ConvNext as a Basis for Interpretability in Coffee Leaf Rust Classification.
- Author
-
Chavarro, Adrian, Renza, Diego, and Moya-Albor, Ernesto
- Subjects
CONVOLUTIONAL neural networks, IMAGE recognition (Computer vision), ARTIFICIAL intelligence, DEEP learning, CLASSIFICATION, COFFEE
- Abstract
The increasing complexity of deep learning models can make them difficult to interpret and assess beyond a purely accuracy-focused evaluation. This is where interpretable models and eXplainable Artificial Intelligence (XAI) come into play to facilitate an understanding of the inner workings of models. Consequently, alternatives have emerged, such as class activation mapping (CAM) techniques, which aim to identify the regions of an image most important to a classification model. However, the behavior of such techniques can be highly dependent on the type of architecture and the different variants of convolutional neural networks. Accordingly, this paper evaluates three Convolutional Neural Network (CNN) architectures (VGG16, ResNet50, ConvNext-T) against seven CAM models (GradCAM, XGradCAM, HiResCAM, LayerCAM, GradCAM++, GradCAMElementWise, and EigenCAM), finding that the CAM maps obtained with ConvNext models show less variability among them, i.e., they are less dependent on the selected CAM approach. This study was performed on an image dataset for the classification of coffee leaf rust and evaluated using the RemOve And Debias (ROAD) metric. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Predicting CO2 production of lactating dairy cows from animal, dietary, and production traits using an international dataset.
- Author
-
Kjeldsen, M. H, Johansen, M., Weisbjerg, M.R., Hellwing, A.L.F., Bannink, A., Colombini, S., Crompton, L., Dijkstra, J., Eugène, M., Guinguina, A., Hristov, A.N., Huhtanen, P., Jonker, A., Kreuzer, M., Kuhla, B., Martin, C., Moate, P.J., Niu, P., Peiren, N., and Reynolds, C.
- Subjects
- *
CARBON dioxide , *DAIRY cattle , *MILK yield , *COMPOSITION of milk , *ABSOLUTE value , *LACTATION in cattle - Abstract
Automated measurements of the ratio of concentrations of methane and carbon dioxide, [CH4]:[CO2], in breath from individual animals (the so-called "sniffer technique") and estimated CO2 production can be used to estimate CH4 production, provided that CO2 production can be reliably calculated. This would allow CH4 production from individual cows to be estimated in large cohorts of cows, whereby ranking of cows according to their CH4 production might become possible and their values could be used for breeding of low CH4-emitting animals. Estimates of CO2 production are typically based on predictions of heat production, which can be calculated from body weight (BW), energy-corrected milk yield, and days of pregnancy. The objectives of the present study were to develop predictions of CO2 production directly from milk production, dietary, and animal variables, and furthermore to develop different models to be used for different scenarios, depending on available data. An international dataset with 2,244 records from individual lactating cows including CO2 production and associated traits, such as dry matter intake (DMI), diet composition, BW, milk production and composition, days in milk, and days pregnant, was compiled to constitute the training dataset. Research location and experiment nested within research location were included as random intercepts. The method of CO2 production measurement (respiration chamber [RC] or GreenFeed [GF]) was confounded with research location, and therefore excluded from the model. In total, 3 models were developed based on the current training dataset: model 1 ("best model"), where all significant traits were included; model 2 ("on-farm model"), where DMI was excluded; and model 3 ("reduced on-farm model"), where both DMI and BW were excluded.
Evaluation on test datasets with either RC data (n = 103), GF data without additives (n = 478), or GF data only including observations where nitrate, 3-nitrooxypropanol (3-NOP), or a combination of nitrate and 3-NOP were fed to the cows (GF+: n = 295) showed good precision of the 3 models, illustrated by low slope bias both in absolute value (−0.22 to 0.097) and as a percentage of mean square error (MSE; 0.049 to 4.89). However, the mean bias (MB) indicated systematic overprediction and underprediction of CO2 production when the models were evaluated on the GF and the RC test datasets, respectively. To address this bias, the 3 models were evaluated on a modified test dataset, where CO2 production (g/d) was adjusted by subtracting (for measurements obtained by RC) or adding (for measurements obtained by GF) the absolute MB obtained from evaluating the specific model on the RC, GF, and GF+ test datasets. With this modification, the absolute values of MB, and MB as a percentage of MSE, became negligible. In conclusion, the 3 models were precise in predicting CO2 production from lactating dairy cows. [ABSTRACT FROM AUTHOR]
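The mean-bias and slope-bias percentages of MSE quoted above come from the standard decomposition of mean square error used in model evaluation studies of this kind. A hedged sketch of that decomposition, with made-up numbers rather than the paper's data:

```python
import numpy as np

def mse_decomposition(obs, pred):
    """Decompose MSE into mean-bias, slope-bias and random components,
    the standard split reported (as percentages of MSE) in model
    evaluation studies like the one above."""
    e = obs - pred                                 # residuals
    mb = e.mean()                                  # mean bias (MB)
    pc = pred - pred.mean()                        # centred predictions
    slope = (pc * e).sum() / (pc ** 2).sum()       # residuals vs. predictions
    mse = (e ** 2).mean()
    mb_sq = mb ** 2                                # mean-bias component
    slope_bias = slope ** 2 * (pc ** 2).mean()     # slope-bias component
    random_err = mse - mb_sq - slope_bias          # remainder is random error
    return mse, mb_sq, slope_bias, random_err

# Made-up numbers, not the paper's data: obs systematically above pred.
pred = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
obs = 2.0 + 1.2 * pred
mse, mb_sq, slope_bias, random_err = mse_decomposition(obs, pred)
pct_slope = 100.0 * slope_bias / mse               # slope bias as % of MSE
```

The three components sum exactly to the MSE, which is what makes reporting each as a percentage of MSE meaningful.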
- Published
- 2024
- Full Text
- View/download PDF
16. Methodology and evaluation in sports analytics: challenges, approaches, and lessons learned.
- Author
-
Davis, Jesse, Bransen, Lotte, Devos, Laurens, Jaspers, Arne, Meert, Wannes, Robberechts, Pieter, Van Haaren, Jan, and Van Roy, Maaike
- Subjects
DATA analytics ,MACHINE learning ,EVALUATION methodology ,EXPERTISE ,ACQUISITION of data - Abstract
There has been an explosion of data collected about sports. Because such data is extremely rich and complex, machine learning is increasingly being used to extract actionable insights from it. Typically, machine learning is used to build models and indicators that capture the skills, capabilities, and tendencies of athletes and teams. Such indicators and models are in turn used to inform decision-making at professional clubs. Designing these indicators requires paying careful attention to a number of subtle issues from a methodological and evaluation perspective. In this paper, we highlight these challenges in sports and discuss a variety of approaches for handling them. Methodologically, we highlight that dependencies affect how to perform data partitioning for evaluation as well as the need to consider contextual factors. From an evaluation perspective, we draw a distinction between evaluating the developed indicators themselves versus the underlying models that power them. We argue that both aspects must be considered, but that they require different approaches. We hope that this article helps bridge the gap between traditional sports expertise and modern data analytics by providing a structured framework with practical examples. [ABSTRACT FROM AUTHOR]
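The data-partitioning point above — rows from the same match or player are not independent — is typically handled by splitting at the group level rather than the row level. A minimal sketch of such a split (our illustration, not the authors' code; the records and `group_key` are hypothetical):

```python
import random

def group_split(records, group_key, test_frac=0.25, seed=0):
    """Assign whole groups (e.g. matches) to train or test, so that
    correlated rows from one match never straddle the split."""
    groups = sorted({group_key(r) for r in records})
    rng = random.Random(seed)
    rng.shuffle(groups)
    n_test = max(1, int(len(groups) * test_frac))
    test_groups = set(groups[:n_test])
    train = [r for r in records if group_key(r) not in test_groups]
    test = [r for r in records if group_key(r) in test_groups]
    return train, test

# Usage: each record is (match_id, row_index); split by match.
records = [(m, i) for m in range(8) for i in range(10)]
train, test = group_split(records, group_key=lambda r: r[0])
```

A plain row-level shuffle would leak information between train and test whenever two rows from the same match land on opposite sides of the split.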
- Published
- 2024
- Full Text
- View/download PDF
17. Insights from Augmented Data Integration and Strong Regularization in Drug Synergy Prediction with SynerGNet.
- Author
-
Liu, Mengmeng, Srivastava, Gopal, Ramanujam, J., and Brylinski, Michal
- Subjects
GRAPH neural networks ,DRUG synergism ,DATA augmentation ,COMPUTATIONAL biology ,DEEP learning - Abstract
SynerGNet is a novel approach to predicting drug synergy against cancer cell lines. In this study, we discuss in detail the construction process of SynerGNet, emphasizing its comprehensive design tailored to handle complex data patterns. Additionally, we investigate a counterintuitive phenomenon in which integrating more augmented data into the training set results in an increase in testing loss alongside improved predictive accuracy. This sheds light on the nuanced dynamics of model learning. Further, we demonstrate the effectiveness of strong regularization techniques in mitigating overfitting, ensuring the robustness and generalization ability of SynerGNet. Finally, the continuous performance enhancements achieved through the integration of augmented data are highlighted. By gradually increasing the amount of augmented data in the training set, we observe substantial improvements in model performance. For instance, compared to models trained exclusively on the original data, the integration of the augmented data can lead to a 5.5% increase in the balanced accuracy and a 7.8% decrease in the false positive rate. Through rigorous benchmarks and analyses, our study contributes valuable insights into the development and optimization of predictive models in biomedical research. [ABSTRACT FROM AUTHOR]
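The two metrics quoted above can be computed directly from a confusion matrix. A small self-contained sketch (the labels are synthetic, not SynerGNet outputs):

```python
def balanced_accuracy_and_fpr(y_true, y_pred):
    """Balanced accuracy = mean of sensitivity and specificity;
    false positive rate (FPR) = FP / (FP + TN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return 0.5 * (sensitivity + specificity), fp / (fp + tn)

# Synthetic labels: 4 positives (3 caught), 4 negatives (2 false alarms).
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 1]
bacc, fpr = balanced_accuracy_and_fpr(y_true, y_pred)  # 0.625, 0.5
```

Balanced accuracy is preferred over plain accuracy when, as is typical for synergy labels, the classes are imbalanced.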
- Published
- 2024
- Full Text
- View/download PDF
18. An Adaptive Surrogate-Assisted Particle Swarm Optimization Algorithm Combining Effectively Global and Local Surrogate Models and Its Application.
- Author
-
Qu, Shaochun, Liu, Fuguang, and Cao, Zijian
- Subjects
ADAPTIVE control systems ,PARTICLE swarm optimization ,GAUSSIAN distribution ,ALGORITHMS ,AEROFOILS ,PREDICTION models ,EVOLUTIONARY algorithms - Abstract
Numerous surrogate-assisted evolutionary algorithms have been proposed for expensive optimization problems. However, each surrogate model has its own characteristics and applicable situations, which poses a serious challenge for model selection. To alleviate this challenge, this paper proposes an adaptive surrogate-assisted particle swarm optimization (ASAPSO) algorithm that effectively combines global and local surrogate models and uses the uncertainty level of the current population state to evaluate the approximation ability of the surrogate model in its predictions. In ASAPSO, the switch between local and global surrogate models is controlled by an adaptive Gaussian distribution parameter that gauges when switching is advisable, improving the search process with better local exploration and diversity among uncertain solutions. Four expensive optimization benchmark functions and an airfoil aerodynamic real-world engineering optimization problem are used to validate the effectiveness and performance of ASAPSO. Experimental results demonstrate that ASAPSO is superior in solution accuracy to state-of-the-art algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Predicting CO2 production of lactating dairy cows from animal, dietary, and production traits using an international dataset
- Author
-
M. H Kjeldsen, M. Johansen, M.R. Weisbjerg, A.L.F. Hellwing, A. Bannink, S. Colombini, L. Crompton, J. Dijkstra, M. Eugène, A. Guinguina, A.N. Hristov, P. Huhtanen, A. Jonker, M. Kreuzer, B. Kuhla, C. Martin, P.J. Moate, P. Niu, N. Peiren, C. Reynolds, S.R.O. Williams, and P. Lund
- Subjects
tracer gas ,cattle ,heat production ,model evaluation ,Dairy processing. Dairy products ,SF250.5-275 ,Dairying ,SF221-250 - Abstract
ABSTRACT: Automated measurements of the ratio of concentrations of methane and carbon dioxide, [CH4]:[CO2], in breath from individual animals (the so-called “sniffer technique”) and estimated CO2 production can be used to estimate CH4 production, provided that CO2 production can be reliably calculated. This would allow CH4 production from individual cows to be estimated in large cohorts of cows, whereby ranking of cows according to their CH4 production might become possible and their values could be used for breeding of low CH4-emitting animals. Estimates of CO2 production are typically based on predictions of heat production, which can be calculated from body weight (BW), energy-corrected milk yield, and days of pregnancy. The objectives of the present study were to develop predictions of CO2 production directly from milk production, dietary, and animal variables, and furthermore to develop different models to be used for different scenarios, depending on available data. An international dataset with 2,244 records from individual lactating cows including CO2 production and associated traits, such as dry matter intake (DMI), diet composition, BW, milk production and composition, days in milk, and days pregnant, was compiled to constitute the training dataset. Research location and experiment nested within research location were included as random intercepts. The method of CO2 production measurement (respiration chamber [RC] or GreenFeed [GF]) was confounded with research location, and therefore excluded from the model. In total, 3 models were developed based on the current training dataset: model 1 (“best model”), where all significant traits were included; model 2 (“on-farm model”), where DMI was excluded; and model 3 (“reduced on-farm model”), where both DMI and BW were excluded.
Evaluation on test datasets with either RC data (n = 103), GF data without additives (n = 478), or GF data only including observations where nitrate, 3-nitrooxypropanol (3-NOP), or a combination of nitrate and 3-NOP were fed to the cows (GF+: n = 295) showed good precision of the 3 models, illustrated by low slope bias both in absolute value (−0.22 to 0.097) and as a percentage of mean square error (MSE; 0.049 to 4.89). However, the mean bias (MB) indicated systematic overprediction and underprediction of CO2 production when the models were evaluated on the GF and the RC test datasets, respectively. To address this bias, the 3 models were evaluated on a modified test dataset, where CO2 production (g/d) was adjusted by subtracting (for measurements obtained by RC) or adding (for measurements obtained by GF) the absolute MB obtained from evaluating the specific model on the RC, GF, and GF+ test datasets. With this modification, the absolute values of MB, and MB as a percentage of MSE, became negligible. In conclusion, the 3 models were precise in predicting CO2 production from lactating dairy cows.
- Published
- 2024
- Full Text
- View/download PDF
20. Brand-driven identity development of places: application, evaluation and improvement suggestions of the BIDP-framework
- Author
-
Maffei, Davide
- Published
- 2024
- Full Text
- View/download PDF
21. Characteristics Evaluation and Application Analysis on Animal Models of Recurrent Spontaneous Abortion
- Author
-
DING Tiansong, XIE Jinghong, YANG Bin, LI Heqiao, QIAO Yizhuo, CHEN Xinru, TIAN Wenfan, LI Jiapei, ZHANG Wanyi, and LI Fanxuan
- Subjects
recurrent spontaneous abortion ,animal model ,model characteristics ,model evaluation ,Medicine - Abstract
Objective To summarize and evaluate the characteristics of current recurrent spontaneous abortion (RSA) animal models at home and abroad, and to provide reference and guidance for the standardized preparation of RSA models. Methods "Recurrent spontaneous abortion" and "animal model" were used as co-keywords in CNKI, Wanfang, VIP, PubMed and Web of Science databases to search the RSA animal experimental literature, covering the period up to January 20, 2024, and a total of 1 411 articles were collected. The analysis focused on construction methods and essential elements of RSA animal models, the modeling process and result evaluation, as well as the application of these models in pharmacological and pharmacodynamic research. An Excel table was established for systematic analysis and discussion. Results A total of 138 experimental studies were obtained after screening. In constructing RSA animal models, immunological models were the most widely used in Western medicine (96.92%), with the Clark model being the main one (92.31%). In traditional Chinese medicine (TCM) models, 70.00% were kidney deficiency-luteal inhibition-syndrome combination models, 20.00% were kidney deficiency and blood stasis models, and 10.00% were deficiency-heat syndrome models. Most animals were selected at 6-8 weeks (33.86%) and 8 weeks (32.28%) of age. The majority of animals were paired for mating at 18:00 on the day of cage pairing. In 81.03% of the studies, vaginal plugs were checked once the following morning, with 8:00 being the most common time (17.02%). The most commonly used drug administration cycle was 14 days of continuous gavage after pregnancy. Among the tested drugs, Western drugs were mainly protein-based (29.17%), while TCM drugs were mainly TCM decoction (81.11%). The most frequently used methods for detecting indicators included visual observation of embryos (22.54%), western blot (15.96%), PCR (13.58%), ELISA (12.91%), HE staining (10.80%) and immunohistochemistry (9.39%).
Conclusion The etiology of RSA is complex, and corresponding animal models should be established based on different etiologies. Clark model is commonly used in the construction of Western medicine model, while the kidney deficiency-luteal inhibition-syndrome combination model is predominant in TCM. RSA animal model is widely used in related research, but systematic evaluation needs to be strengthened.
- Published
- 2024
- Full Text
- View/download PDF
22. The historical to future linkage of Arctic amplification on extreme precipitation over the Northern Hemisphere using CMIP5 and CMIP6 models
- Author
-
Jun Liu, Xiao-Fan Wang, Dong-You Wu, and Xin Wang
- Subjects
Arctic amplification ,Extreme precipitation ,CMIP5 ,CMIP6 ,Model evaluation ,Planetary waves ,Meteorology. Climatology ,QC851-999 ,Social sciences (General) ,H1-99 - Abstract
Arctic warming played a dominant role in recent occurrences of extreme events over the Northern Hemisphere, but climate models cannot accurately simulate the relationship. Here a significant positive correlation (0.33–0.95) between extreme precipitation and Arctic amplification (AA) is found using observations and CMIP5/6 multi-model ensembles. However, CMIP6 models are superior to CMIP5 models in simulating the temporal evolution of extreme precipitation and AA. According to 14 optimal CMIP6 models, the maximum latitude of planetary waves and the strength of Northern Hemisphere annular mode (NAM) will increase with increasing AA, contributing to increased extreme precipitation over the Northern Hemisphere. Under the Shared Socioeconomic Pathway SSP5-8.5, AA is expected to increase by 0.85 °C per decade while the maximum latitude of planetary waves will increase by 2.82° per decade. Additionally, the amplitude of the NAM will increase by 0.21 hPa per decade, contributing to a rise in extreme precipitation of 1.17% per decade for R95pTOT and 0.86% per decade for R99pTOT by 2100.
- Published
- 2024
- Full Text
- View/download PDF
23. Insights from Augmented Data Integration and Strong Regularization in Drug Synergy Prediction with SynerGNet
- Author
-
Mengmeng Liu, Gopal Srivastava, J. Ramanujam, and Michal Brylinski
- Subjects
drug synergy prediction ,graph neural networks ,computational biology ,regularization in deep learning ,data augmentation ,model evaluation ,Computer engineering. Computer hardware ,TK7885-7895 - Abstract
SynerGNet is a novel approach to predicting drug synergy against cancer cell lines. In this study, we discuss in detail the construction process of SynerGNet, emphasizing its comprehensive design tailored to handle complex data patterns. Additionally, we investigate a counterintuitive phenomenon in which integrating more augmented data into the training set results in an increase in testing loss alongside improved predictive accuracy. This sheds light on the nuanced dynamics of model learning. Further, we demonstrate the effectiveness of strong regularization techniques in mitigating overfitting, ensuring the robustness and generalization ability of SynerGNet. Finally, the continuous performance enhancements achieved through the integration of augmented data are highlighted. By gradually increasing the amount of augmented data in the training set, we observe substantial improvements in model performance. For instance, compared to models trained exclusively on the original data, the integration of the augmented data can lead to a 5.5% increase in the balanced accuracy and a 7.8% decrease in the false positive rate. Through rigorous benchmarks and analyses, our study contributes valuable insights into the development and optimization of predictive models in biomedical research.
- Published
- 2024
- Full Text
- View/download PDF
24. A sigmoidal model for predicting soil thermal conductivity-water content function in room temperature
- Author
-
Ali Reza Sepaskhah and Maasumeh Mazaheri-Tehrani
- Subjects
Logistic equation ,Model evaluation ,Heat probe ,Thermal conductivity measurements ,Thermal properties ,Medicine ,Science - Abstract
Abstract Apparent thermal conductivity of soil (λ) as a function of soil water content (θ), i.e., λ(θ), is needed to determine the heat flow in soil, and the function λ(θ) can be used in heat and water flow models for simplicity. The objective of this study was to develop a sigmoidal model based on the logistic equation, valid over the entire range of soil water contents and a wide range of soil textures, that can be used in simulations of heat and water flow in the respective models. Further, the performance of the developed sigmoidal model was evaluated along with two other models from the literature. In the proposed sigmoidal model, the constants are estimated from empirical multivariate equations using soil sand content and bulk density. The sigmoidal model was validated with good accuracy for a wide range of soil textures, as the relationship between the measured and predicted λ showed slope and intercept values of nearly 1.0 and 0.0, respectively. Comparison of the results obtained with the sigmoidal model against those from the Johansen and Lu et al. models indicated that the sigmoidal model was superior in predicting λ for a wide range of soil textures and soil water contents. Furthermore, comparison with a recently proposed model by Xiong et al. indicated that our sigmoidal model is superior. Therefore, the developed sigmoidal model can be used in heat and water flow models to predict soil temperature and heat flow.
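A logistic λ(θ) of the kind described can be sketched as below; the functional form is the generic logistic equation, and all constants are illustrative only, not the paper's fitted multivariate estimates from sand content and bulk density:

```python
import math

def thermal_conductivity(theta, lam_max, k, theta0):
    """Logistic (sigmoidal) form for apparent soil thermal conductivity
    as a function of volumetric water content theta.
    lam_max (W/m/K), steepness k and midpoint theta0 are soil-specific
    constants; in the paper they come from empirical multivariate
    equations, here they are placeholder values."""
    return lam_max / (1.0 + math.exp(-k * (theta - theta0)))

# Illustrative constants for a loam-like soil (hypothetical values).
lam = [thermal_conductivity(th / 100.0, lam_max=1.6, k=25.0, theta0=0.15)
       for th in range(0, 45, 5)]
# lam rises in an S-shape from near-dry values toward lam_max.
```

By construction the curve passes through half of lam_max at θ = θ0, which is what gives the model its sigmoidal shape across the full water-content range.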
- Published
- 2024
- Full Text
- View/download PDF
25. Correction and Validation of the Inflow Wind Speed of the Fitch Wind Farm Parameterization
- Author
-
Zeming XIE, Ye YU, Longxiang DONG, Teng MA, and Xuewei WANG
- Subjects
wind farm ,parameterization ,numerical simulation ,model evaluation ,large eddy simulation ,Meteorology. Climatology ,QC851-999 - Abstract
Wind farm wakes have a significant impact on momentum and turbulence fluxes within the atmospheric boundary layer, thereby influencing the local climate and environment. Mesoscale numerical models incorporating wind farm parameterizations are powerful tools for studying the climate and environmental impacts of wind farms. In this study, the wind speed and turbulence kinetic energy profiles of the Fitch wind farm parameterization scheme in the WRF mesoscale model are evaluated in the turbine and wake regions using high-resolution Large Eddy Simulations (LES) as “true values”, and a method based on the relation derived from classical momentum theory is proposed to correct the grid-inflow wind speed. The method takes into account the blocking effect caused by the grid-equivalent thrust, and the corrected wind speed is closer to the free-stream wind speed. Results show that the difference between the grid-inflow wind speed from the original Fitch parameterization scheme and the LES is significant and sensitive to the model horizontal resolution. The Fitch-new parameterization with corrected grid-inflow wind speed reduces the relative error in absolute value between grid-inflow and free-stream wind speed to less than 1% across different horizontal resolutions (1000 m, 500 m, and 250 m). The spatially averaged thrust and output power are consistent with LES results. The Fitch-new parameterization improves the simulated wind speed deficit in the wake zone of the wind turbine, especially in the grid containing the turbine at high resolution. Although the simulated increase in turbulent kinetic energy and its vertical distribution in the wake zone are improved compared to the original Fitch scheme, certain issues remain that require further investigation.
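The classical momentum-theory relation underlying such a correction can be sketched as follows; note that the paper derives its own correction from the grid-equivalent thrust, which this textbook actuator-disc relation only approximates:

```python
import math

def free_stream_from_grid(u_grid, ct):
    """Estimate the free-stream wind speed from the blocked (grid) wind
    speed using 1-D momentum theory: u_grid = u_inf * (1 - a), with
    thrust coefficient ct = 4 * a * (1 - a), valid for ct <= 1.
    This is the textbook relation, not the paper's exact correction."""
    a = 0.5 * (1.0 - math.sqrt(1.0 - ct))  # axial induction factor
    return u_grid / (1.0 - a)

# Illustrative numbers: blocked speed 7.2 m/s at thrust coefficient 0.75.
u_inf = free_stream_from_grid(u_grid=7.2, ct=0.75)  # -> 9.6 m/s
```

Because the blocked speed underestimates the free stream, dividing by (1 − a) recovers the larger inflow speed that the turbine actually experiences upstream.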
- Published
- 2024
- Full Text
- View/download PDF
26. Groundwater Level Prediction for Landslides Using an Improved TANK Model Based on Big Data.
- Author
-
Zheng, Yufeng, Huang, Dong, Fan, Xiaoyi, and Shi, Lili
- Subjects
DEBRIS avalanches ,LANDSLIDE prediction ,RAINFALL ,STORAGE tanks ,WATER levels ,LANDSLIDES ,WATER table - Abstract
Geological conditions and rainfall intensity are two primary factors that can induce changes in groundwater level, which is one of the major triggers of geological disasters such as collapse, landslides, and debris flow. In view of this, an improved TANK model is developed based on the influence of rainfall intensity, terrain, and geological conditions on the groundwater level, in order to effectively predict the groundwater level evolution of rainfall-induced landslides. A trapezoidal structure is used instead of the traditional rectangular structure to describe the nonlinear change in the water-level cross-section, allowing the groundwater storage of rainfall landslides to be estimated accurately. Furthermore, big data techniques are used to extract effective features from large-scale monitoring data, from which we build prediction models to accurately predict changes in groundwater levels. Monitoring data of the Taziping landslide are taken as the reference for the study. The simulation results of the traditional TANK model and the improved TANK model are compared with the actual monitoring data, showing that the improved TANK model can effectively simulate the changing trend in the groundwater level with rainfall. The study can provide a reliable basis for predicting and evaluating changes in the groundwater state in rainfall-type landslides. [ABSTRACT FROM AUTHOR]
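A single tank with a trapezoidal (rather than the traditional rectangular) cross-section can be sketched as below; all parameters are illustrative, not calibrated to the Taziping landslide:

```python
def tank_step(h, rain, dt=1.0, b=1.0, m=0.5, k=0.2, h0=0.1):
    """One explicit step of a single trapezoidal tank.
    Water-surface width grows with level h as b + m*h (the trapezoid),
    so the same inflow raises a full tank less than an empty one.
    Lateral outflow is k*(h - h0) above the threshold h0.
    All parameter values here are illustrative, not from the paper."""
    area = b + m * h                       # water-surface width at level h
    outflow = k * max(h - h0, 0.0)         # side-outlet discharge
    dh = (rain - outflow) * dt / area      # level change from the balance
    return max(h + dh, 0.0), outflow

# Simple rainfall pulse followed by drainage.
h = 0.0
levels = []
for rain in [0.3, 0.3, 0.0, 0.0, 0.0]:
    h, q = tank_step(h, rain)
    levels.append(h)
# The level rises during rain, then recedes as the tank drains.
```

The trapezoidal term is the only change from the classic rectangular tank: storage becomes nonlinear in h, which is the abstract's stated motivation.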
- Published
- 2024
- Full Text
- View/download PDF
27. Evaluating the performance of a system model in predicting zooplankton dynamics: Insights from the Bering Sea ecosystem.
- Author
-
Sullaway, Genoa, Cunningham, Curry J., Kimmel, David, Pilcher, Darren J., and Thorson, James T.
- Subjects
- *
FISHERY management , *FISHERY resources , *SPECIES distribution , *FUNCTIONAL groups , *INFORMATION resources - Abstract
Understanding how ecosystem change influences fishery resources through trophic pathways is a key tenet of ecosystem‐based fishery management. System models (SM), which use numerical modeling to describe physical and biological processes, can advance the inclusion of ecosystem and prey information in fisheries management; however, incorporating SMs in management requires evaluation against empirical data. The Bering Ecosystem Study Nutrient‐Phytoplankton‐Zooplankton (BESTNPZ) model is an SM, originally created by the Bering Ecosystem Study (initiated in 2006) and later expanded by Kearney et al., that includes zooplankton biomass hindcasts for the Bering Sea. In the Bering Sea, zooplankton are an important prey item for fishery species, yet the zooplankton component of this SM has not been validated against empirical data. We compared empirical zooplankton data to BESTNPZ hindcast estimates for three zooplankton functional groups and found that the two sources of information are on different absolute scales. We found high correlation between relative seasonal biomass trends estimated by BESTNPZ and empirical data for large off‐shelf copepods (Neocalanus spp.) and low correlations for large on‐shelf copepods and small copepods (Calanus spp. and Pseudocalanus spp., respectively). To address these discrepancies, we constructed hybrid species distribution models (H‐SDM), which predict zooplankton biomass using the BESTNPZ hindcast and environmental covariates. We found that H‐SDMs offered marginal improvements over correlative species distribution models (C‐SDMs) relying solely on empirical data for spatial extrapolation, and little improvement for most functional groups when forecasting short‐term temporal zooplankton biomass trends. Overall, we suggest that interpretation of current BESTNPZ hindcasts should be tempered by our understanding of key mismatches in absolute scale, seasonality, and annual indices between BESTNPZ and empirical data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Characteristics of the temperature correlation network of climate models.
- Author
-
Wang, Tingyu, Gong, Zhiqiang, Yuan, Naiming, Liu, Wenqi, Qiao, Panjie, and Feng, Guolin
- Subjects
- *
ATMOSPHERIC temperature , *ATMOSPHERIC models , *DEBYE temperatures , *LEAD time (Supply chain management) , *PREDICTION models - Abstract
Temperature correlation networks have been widely used for climate monitoring, diagnosis, and prediction with reanalysis data; however, their application to the network analysis of climate dynamical models has not been studied in depth. We construct temperature correlation networks from the near-surface 2 m air temperature of four climate models, compare how well each captures the structural characteristics of the observed temperature network, and conduct a comparative analysis of the topological differences among the models. The network features vary significantly across the four models: the ECMWF-SYSTEM5 network has the highest connectivity and the NCEP_CFS2 network the lowest. Models with higher connectivity generally show stronger correlations between nodes of the air-temperature network, which is likely attributable to stronger teleconnections, greater regional consistency, and smaller standard deviations between the predicted temperature series of most pairs of grid points. The results also suggest that a model's prediction skill is related to its network structure: for each model, the 1-month-lead prediction has the highest skill, corresponding to a connectivity closest to the observation. As the lead time increases, the connectivity bias rises quickly and the prediction skill clearly decreases. 
However, comparing different models at the same lead time, a larger connectivity deviation does not always mean lower prediction skill. For example, the ECMWF_SYSTEM5 model has both the highest prediction skill and the largest connectivity deviation; its network possesses significantly higher connectivity and more distinctive small-world characteristics, implying that a stable network structure helps to improve a model's prediction skill. This study can therefore deepen our understanding of climate models and provide guidance for improving their ability to properly simulate climate features. [ABSTRACT FROM AUTHOR]
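Building a correlation network from temperature series and measuring its connectivity (here, mean node degree) can be sketched as follows; the threshold and the synthetic data are illustrative, not the paper's:

```python
import numpy as np

def build_network(series, threshold=0.5):
    """Correlation network: nodes are grid points; an edge links two
    nodes when the Pearson correlation of their temperature series
    exceeds the threshold (an illustrative value). Connectivity is
    summarized as the mean node degree."""
    corr = np.corrcoef(series)                 # series: (n_nodes, n_times)
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)                   # no self-loops
    return adj, adj.sum(axis=1).mean()         # adjacency, mean degree

rng = np.random.default_rng(0)
base = rng.standard_normal(120)
# Three nodes sharing a common signal plus one independent node.
series = np.stack(
    [base + 0.3 * rng.standard_normal(120) for _ in range(3)]
    + [rng.standard_normal(120)]
)
adj, connectivity = build_network(series)
```

Nodes driven by the shared signal end up densely connected while the independent node stays isolated, mirroring how stronger teleconnections raise a model network's connectivity.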
- Published
- 2024
- Full Text
- View/download PDF
29. Understanding the ENSO–East Asian winter monsoon relationship in CMIP6 models: performance evaluation and influencing factors.
- Author
-
Guo, Wenxiao, Hao, Xin, Zhou, Botao, Li, Jiandong, and Han, Tingting
- Subjects
- *
OCEAN temperature , *ANTICYCLONES , *MONSOONS ,EL Nino ,KUROSHIO - Abstract
It is widely recognized that the East Asian winter monsoon (EAWM) has independent northern and southern modes. This study evaluated the capability of 18 Coupled Model Intercomparison Project Phase 6 models in simulating the relationship between the EAWM and El Niño–Southern Oscillation (ENSO), together with possible causes of inter-model bias. Simulations generated by 12 good-performing models demonstrate that the close relationship between the EAWM southern mode and ENSO depends on the position of the ENSO-related tripolar pattern of sea surface temperature (SST) anomalies in the tropical Indian and Pacific oceans. The positive feedback of anomalous Philippine anticyclones/cyclones with the ENSO-related east–west SST gradient over the Maritime Continent favors anomalous southerly/northerly winds in southeastern China. In contrast, the strength of the Philippine anticyclone, which is influenced by the intensity of ENSO variability, partly affects the ability of models to simulate the relationship between the EAWM northern mode and ENSO. Strong Philippine anticyclonic/cyclonic anomalies extend anomalous southerly/northerly winds from low to mid-high latitudes in East Asia. Another crucial factor in the relationship between the EAWM northern mode and ENSO is the ENSO-related Indian Ocean and South China Sea SST anomalies, which induce a poleward wave train through anomalous latent heating and subsequently generate Kuroshio anticyclonic/cyclonic anomalies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Characteristics Evaluation and Application Analysis on Animal Models of Recurrent Spontaneous Abortion.
- Author
-
丁天送, 谢京红, 杨 斌, 李河桥, 乔一倬, 陈心如, 田纹凡, 李佳佩, 张婉怡, and 李帆旋
- Abstract
Objective To summarize and evaluate the characteristics of current recurrent spontaneous abortion (RSA) animal models at home and abroad, and to provide reference and guidance for the standardized preparation of RSA models. Methods "Recurrent spontaneous abortion" and "animal model" were used as co-keywords in CNKI, Wanfang, VIP, PubMed and Web of Science databases to search the RSA animal experimental literature, covering the period up to January 20, 2024, and a total of 1 411 articles were collected. The analysis focused on construction methods and essential elements of RSA animal models, the modeling process and result evaluation, as well as the application of these models in pharmacological and pharmacodynamic research. An Excel table was established for systematic analysis and discussion. Results A total of 138 experimental studies were obtained after screening. In constructing RSA animal models, immunological models were the most widely used in Western medicine (96.92%), with the Clark model being the main one (92.31%). In traditional Chinese medicine (TCM) models, 70.00% were kidney deficiency-luteal inhibition-syndrome combination models, 20.00% were kidney deficiency and blood stasis models, and 10.00% were deficiency-heat syndrome models. Most animals were selected at 6-8 weeks (33.86%) and 8 weeks (32.28%) of age. The majority of animals were paired for mating at 18:00 on the day of cage pairing. In 81.03% of the studies, vaginal plugs were checked once the following morning, with 8:00 being the most common time (17.02%). The most commonly used drug administration cycle was 14 days of continuous gavage after pregnancy. Among the tested drugs, Western drugs were mainly protein-based (29.17%), while TCM drugs were mainly TCM decoction (81.11%). The most frequently used methods for detecting indicators included visual observation of embryos (22.54%), western blot (15.96%), PCR (13.58%), ELISA (12.91%), HE staining (10.80%) and immunohistochemistry (9.39%).
Conclusion The etiology of RSA is complex, and corresponding animal models should be established based on different etiologies. The Clark model is commonly used in the construction of Western medicine models, while the kidney deficiency-luteal inhibition-syndrome combination model is predominant in TCM. RSA animal models are widely used in related research, but their systematic evaluation needs to be strengthened. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Risks of regionalized stock assessments for widely distributed species like the panmictic European eel.
- Author
-
Höhne, Leander, Briand, Cédric, Freese, Marko, Marohn, Lasse, Pohlmann, Jan-Dag, van der Hammen, Tessa, and Hanel, Reinhold
- Subjects
- *
ANGUILLA anguilla , *EELS , *EXTRAPOLATION , *BIOMASS , *RISK assessment - Abstract
In fisheries management, accurate stock assessment is pivotal to determine sustainable harvest levels or the scope of conservation measures. When assessment is decentralized and methods differ regionally, adopted approaches must be subjected to rigorous quality-checking, as biased assessments may mislead management decisions. To enable recovery of the critically endangered European eel, EU countries must fulfill a biomass target of potential spawner ("silver eel") escapement, while local eel stock assessment approaches vary widely. We summarize local approaches and results of ground-truthing studies based on direct silver eel monitoring, to evaluate the accuracy of eel stock assessments in retrospect and identify bias sources. A substantial fraction of eel habitat is currently unassessed or assessed by unvalidated approaches. Across assessment models for which validation exists, demographic models frequently overestimated actual escapement, while misestimations of extrapolation ("spatial") models were more balanced, slightly underestimating escapement. Stock size overestimation may lead to overexploitation or insufficient conservation measures, increasing the risk of stock collapse or slow recovery in coordinated frameworks. Underestimations may imply inefficient allocation of conservation efforts or have negative socioeconomic effects. Our work highlights the risks of regionalizing assessment responsibilities along with management decisions, calling for a common assessment toolbox and centralized quality-checking routines for eel. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Unveiling the impact of unchanged modules across versions on the evaluation of within‐project defect prediction models.
- Author
-
Liu, Xutong, Zhou, Yufei, Lu, Zeyu, Mei, Yuanqing, Yang, Yibiao, Qian, Junyan, and Zhou, Yuming
- Subjects
- *
PREDICTION models , *SOURCE code , *MULTIPLE comparisons (Statistics) , *DATA modeling , *FORECASTING - Abstract
Background: Software defect prediction (SDP) is a topic actively researched in the software engineering community. Within-project defect prediction (WPDP) involves using labeled modules from previous versions of the same project to train classifiers. Over time, many defect prediction models have been evaluated under the WPDP scenario. Problem: Data duplication poses a significant challenge in current WPDP evaluation procedures. Unchanged modules, characterized by identical executable source code, are frequently present in both target and source versions during experimentation. However, it is still unclear how and to what extent the presence of unchanged modules affects the performance assessment of WPDP models and the comparison of multiple WPDP models. Method: In this paper, we provide a method to detect and remove unchanged modules from defect datasets and unveil the impact of data duplication in WPDP on model evaluation. Results: The experiments conducted on 481 target versions from 62 projects provide evidence that data duplication significantly affects the reported performance values of individual learners in WPDP. However, when ranking multiple WPDP models based on prediction performance, the impact of removing unchanged instances is not substantial. Nevertheless, it is important to note that removing unchanged instances does have a slight influence on the selection of models with better generalization. Conclusion: We recommend that future WPDP studies take into consideration the removal of unchanged modules from target versions when evaluating the performance of their models. This practice will enhance the reliability and validity of the results obtained in WPDP research, leading to improved understanding and advancements in defect prediction models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. SynBPS: a parametric simulation framework for the generation of event-log data.
- Author
-
Riess, Mike
- Abstract
In the pursuit of ecological validity, current business process simulation methods are calibrated to data from existing processes. This is important for realistic what-if analysis in the context of these processes. However, this is not always the "right tool for the job." To test hypotheses in the area of predictive process monitoring, it can be more helpful to simulate event-log data from a theoretical process, where all aspects can be manipulated. One example is when assessing the influence of process complexity or variability on the performance of a new prediction method. In this case, the ability to include control variables and systematically change process characteristics is a key to fully understanding their influence. Calibrating a simulation model from observed data alone can in these cases be limiting. This paper proposes a simulation framework, Synthetic Business Process Simulation (SynBPS), a Python library for the generation of event-log data from synthetic processes. Aspects such as process complexity, stability, trace distribution, duration distribution, and case arrivals can be fully controlled by the user. The overall architecture is described in detail, and a demonstration of the framework is presented. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Prediction of Femtosecond Laser Etching Parameters Based on a Backpropagation Neural Network with Grey Wolf Optimization Algorithm.
- Author
-
Liu, Yuhui, Shangguan, Duansen, Chen, Liping, Su, Chang, and Liu, Jing
- Subjects
LASER engraving ,OPTIMIZATION algorithms ,FEMTOSECOND lasers ,MANUFACTURING processes ,PYTHON programming language - Abstract
Investigating the optimal laser processing parameters for industrial purposes can be time-consuming. Moreover, an exact analytic model for this purpose has not yet been developed due to the complex mechanisms of laser processing. The main goal of this study was the development of a backpropagation neural network (BPNN) with a grey wolf optimization (GWO) algorithm for the quick and accurate prediction of multi-input laser etching parameters (energy, scanning velocity, and number of exposures) and multi-output surface characteristics (depth and width), as well as to assist engineers by reducing the time and energy required for the optimization process. The Keras application programming interface (API) Python library was used to develop a GWO-BPNN model for predictions of laser etching parameters. The experimental data were obtained by adopting a 30 W laser source. The GWO-BPNN model was trained and validated on experimental data including the laser processing parameters and the etching characterization results. The R2 score, mean absolute error (MAE), and mean squared error (MSE) were examined to evaluate the prediction precision of the model. The results showed that the GWO-BPNN model exhibited excellent accuracy in predicting all properties, with an R2 value higher than 0.90. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
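The three evaluation metrics named in the abstract above (R2 score, MAE, and MSE) are standard and straightforward to reproduce. The sketch below is a minimal pure-Python illustration; the function name `regression_metrics` and its interface are hypothetical, not taken from the paper or from Keras.

```python
def regression_metrics(y_true, y_pred):
    """Compute MAE, MSE, and the R2 score for a set of predictions.

    Minimal sketch of the metrics named in the abstract; the function
    name and interface are illustrative, not the paper's code.
    """
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n               # mean absolute error
    mse = sum(e * e for e in errors) / n                # mean squared error
    mean_true = sum(y_true) / n
    ss_res = sum(e * e for e in errors)                 # residual sum of squares
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot                          # coefficient of determination
    return mae, mse, r2
```

An R2 value above 0.90, as reported for the GWO-BPNN model, means the model explains more than 90% of the variance in the measured etching depth and width.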
35. A sigmoidal model for predicting soil thermal conductivity-water content function in room temperature.
- Author
-
Sepaskhah, Ali Reza and Mazaheri-Tehrani, Maasumeh
- Subjects
- *
SOIL moisture , *SOIL texture , *SOIL temperature , *SOILS , *THERMAL conductivity - Abstract
Apparent thermal conductivity of soil (λ) as a function of soil water content (θ), i.e., λ(θ), is needed to determine the heat flow in soil. The function λ(θ) can be used in heat and water flow models for simplicity. The objective of this study was to develop a sigmoidal model, based on the logistic equation, for the entire range of soil water contents and a wide range of soil textures, which can be used in the simulation of heat and water flow in the respective models. Further, the performance of the developed sigmoidal model, along with two other models from the literature, was evaluated. In the proposed sigmoidal model, the constants are estimated from empirical multivariate equations using soil sand content and bulk density. The sigmoidal model was validated with good accuracy for a wide range of soil textures, as the relationship between the measured and predicted λ showed slope and intercept values of nearly 1.0 and 0.0, respectively. Comparison of the results obtained by the sigmoidal model with those obtained from the Johansen and Lu et al. models indicated that the sigmoidal model was superior to the other two models in predicting λ for a wide range of soil textures and soil water contents. Furthermore, comparison with a recently proposed model by Xiong et al. indicated that our sigmoidal model is superior. Therefore, our developed sigmoidal model can be used in heat and water flow models to predict soil temperature and heat flow. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. How Well Does the DOE Global Storm Resolving Model Simulate Clouds and Precipitation Over the Amazon?
- Author
-
Tian, Jingjing, Zhang, Yunyan, Klein, Stephen A., Terai, Christopher R., Caldwell, Peter M., Beydoun, Hassan, Bogenschutz, Peter, Ma, Hsi‐Yen, and Donahue, Aaron S.
- Subjects
- *
ATMOSPHERIC radiation measurement , *STORMS , *ATMOSPHERIC models , *SURFACE of the earth , *RAINFALL , *RAINSTORMS - Abstract
This study assesses a 40-day 3.25-km global simulation of the Simple Cloud-Resolving E3SM Model (SCREAMv0) using high-resolution ground-based observations from the Atmospheric Radiation Measurement (ARM) Green Ocean Amazon (GoAmazon) field campaign. SCREAMv0 reasonably captures the diurnal timing of boundary layer clouds yet underestimates the boundary layer cloud fraction and mid-level congestus. SCREAMv0 replicates the precipitation diurnal cycle well; however, it exhibits biases in the precipitation cluster size distribution compared to scanning radar observations. Specifically, SCREAMv0 overproduces clusters smaller than 128 km and does not form enough large clusters. Such biases suggest an inhibition of convective upscale growth, preventing isolated deep convective clusters from evolving into larger mesoscale systems. This model bias is partially attributed to the misrepresentation of land-atmosphere coupling. This study highlights the potential use of high-resolution ground-based observations to diagnose convective processes in global storm resolving model simulations, identify key model deficiencies, and guide future process-oriented model sensitivity tests and detailed analyses. Plain Language Summary: This research examines how well a kilometer grid scale global atmospheric model, the Simple Cloud-Resolving Energy Exascale Earth System Model (SCREAMv0), performs in simulating clouds and rainfall over the Amazon rainforest region. The model was assessed by comparing to high-resolution ground-based observations from the Green Ocean Amazon field campaign supported by the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) program. The model struggles to produce enough middle-level clouds. When comparing the simulated rainfall to radar observations, SCREAMv0 showed good performance on the diurnal pattern of rain rate, but tends to form too many small rain clusters while failing to create large ones. 
A possible contributor to these errors could be the inaccurate depiction of how the earth's surface and the atmosphere interact within the model. Overall, this study shows that using detailed DOE ARM data can help improve our understanding of clouds and rainfall in global storm resolving kilometer grid scale models. Key Points: Convective processes in a global storm resolving model (SCREAMv0) are evaluated using ground-based observations over a tropical rainforest. SCREAMv0 captures the morning development of shallow convection and the early afternoon precipitation peak but lacks mid-level congestus. SCREAMv0 struggles to form large precipitation clusters greater than 128 km and produces smaller ones more often than observed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Model evaluation in human factors and ergonomics (HFE) sciences; case of trust in automation.
- Author
-
Poornikoo, Mehdi and Øvergård, Kjell Ivar
- Subjects
- *
ERGONOMICS , *RESEARCH funding , *EMPIRICAL research , *PSYCHOLOGICAL adaptation , *TRUST , *MATHEMATICAL models , *RESEARCH , *AUTOMATION , *THEORY - Abstract
Theories and models are central to Human Factors/Ergonomics (HFE) sciences for producing new knowledge, pushing the boundaries of the field, and providing a basis for designing systems that can improve human performance. Despite the key role, there has been less attention to what constitutes a good theory/model and how to examine the relative worth of different theories/models. This study aims to bridge this gap by (1) proposing a set of criteria for evaluating models in HFE, (2) employing a methodological approach to utilize the proposed criteria, and (3) evaluating the existing models of trust in automation (TiA) according to the proposed criteria. The resulting work provides a reference guide for researchers to examine the existing models' performance and to make meaningful comparisons between TiA models. The results also shed light on the differences among TiA models in satisfying the criteria. While conceptual models offer valuable insights into identifying the causal factors, their limitation in operationalization poses a major challenge in terms of testability and empirical validity. On the other hand, although more readily testable and possessing higher predictive power, computational models are confined to capturing only partial causal factors and have reduced explanatory power capacity. The study concludes with recommendations that in order to advance as a scientific discipline, HFE should adopt modelling approaches that can help us understand the complexities of human performance in dynamic sociotechnical systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Evaluating the East Asian summer precipitation from the perspective of dominant intermodel spread modes and its implication for future projection.
- Author
-
Shi, Jian
- Subjects
- *
RADIATIVE forcing , *ORTHOGONAL functions , *SUMMER - Abstract
In this study, a new skill score (SS) is proposed to evaluate the performance of climatological East Asian summer precipitation (EASP) in the Coupled Model Intercomparison Project Phase 6 (CMIP6) over the historical period. By applying the empirical orthogonal function (EOF) to the EASP bias of CMIP6 models, the intermodel spread of EASP bias is revealed to be dominated by the first two modes: the uniform precipitation bias pattern and the north-south dipole precipitation bias pattern. Then the SS is constructed by the weighted-average model-observation distances regarding different EOF modes, where the model-observation distance in a certain EOF mode is defined as the difference between their principal components, and the weight is the corresponding percentage variance. The perfect-models ensemble based on the SS shows a spatial magnitude close to the observation, indicating that the SS effectively depicts the models' historical performance. However, no robust relationship is found between the model's historical performance and future projection regarding the EASP. This is because they are governed by different physical factors. The historical EASP is determined by the thermal responses to a specific radiative forcing, while the future change in EASP is associated with the warming rate along with the increased radiative forcing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
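The skill score construction described in the abstract above reduces to a simple weighted sum once the EOF analysis is done. The sketch below assumes the principal components (PCs) and percentage-variance weights have already been computed; the function name and the use of an absolute difference per mode are illustrative assumptions, not the paper's exact formulation.

```python
def weighted_pc_distance(model_pcs, obs_pcs, variance_fractions):
    """Weighted-average model-observation distance across EOF modes.

    Each mode's distance is the difference between model and observed
    principal components (taken here as an absolute difference, an
    illustrative choice), weighted by that mode's explained-variance
    fraction. A smaller distance indicates better historical performance.
    """
    return sum(w * abs(m - o)
               for w, m, o in zip(variance_fractions, model_pcs, obs_pcs))
```

Weighting by explained variance means a mismatch in the dominant bias pattern (e.g., the uniform precipitation bias mode) penalizes a model more than the same mismatch in a minor mode.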
39. Evaluation of Soil–Structure Interface Models.
- Author
-
Wang, Hai-Lin, Yin, Zhen-Yu, Jin, Yin-Fu, and Gu, Xiao-Qiang
- Subjects
- *
SOIL density , *GEOTECHNICAL engineering , *PARAMETER identification , *KAOLIN , *ELASTOPLASTICITY - Abstract
Modeling of the soil–structure interface has been a critical issue in geotechnical engineering. Numerous studies have simulated complex soil–structure interface behaviors. These models usually are assessed by direct comparisons between the simulations and experiments. However, little work has been done to compare the specific interface behaviors simulated by different interface models. This paper evaluated some frequently recognized interface behaviors for six different interface models. These models either were adopted from the existing literature or modified from the existing soil models, including the exponential model, hyperbolic model, hypoplastic model, MCC model, SANISAND model, and SIMSAND model. Global comparisons and effects of the soil density, normal stiffness, and shearing rate were investigated to evaluate the interface models based on Fontainebleau sand–steel interface experiments and kaolin clay–steel interface experiments. The limitations and advantages of different models under different conditions were discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. MODELLING THE IMPACT OF SOIL AND METEOROLOGICAL PARAMETERS ON CARBON DYNAMICS IN WETLAND ECOSYSTEMS.
- Author
-
ENACHE, Natalia, DEÁK, György, LASLO, Lucian, MATEI, Monica, HOLBAN, Elena, BOBOC, Madalina, and HARABAGIU, Alexandra
- Subjects
ECOLOGICAL disturbances ,ECOSYSTEM dynamics ,SOIL moisture ,HYDROLOGY ,SOIL temperature ,CARBON cycle ,WETLAND soils ,WETLANDS - Abstract
Wetlands are characterised by distinct hydrological regimes and have significant importance in the global carbon cycle, having the potential to reduce carbon emissions through long-term carbon storage in the soil. In this study, carbon dynamics were simulated using the process-based DeNitrification-DeComposition (DNDC) model for two locations along the Dâmbovița River case study area. These scenarios took into consideration the interconnection of soil parameters, hydrology, meteorological conditions and vegetation type. The findings showed that soil CO2 emissions are positively and strongly correlated with air temperature and soil moisture, with changes in the water content of the soil regime having the greatest impact on CO2 fluxes. Also, the model simulations have been validated by statistical analysis of uncertainties with the values of CO2 fluxes measured in situ using the dynamic closed chamber method. By comparing DNDC outputs with field measurements, the performance of the model was evaluated in different environmental conditions and the results were consistent, which increased confidence in its application for assessing wetland ecosystems. These results contribute to a more comprehensive understanding of the carbon cycle in wetlands and an improved estimation of the effects of climate change on the dynamics of carbon in these ecosystems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Comparative Analysis of Deep Learning Algorithms for Phishing Email Detection.
- Author
-
Mohamed Ali, Raweia S. and Abduhameed, Razn A.
- Subjects
DEEP learning ,MACHINE learning ,ALGORITHMS ,ARTIFICIAL intelligence ,DIGITAL technology - Abstract
Using skewed sequential data, the study explores the effectiveness of numerous sequential models designed for binary classification tasks. The dataset under investigation consists of 5,595 testing samples and 13,055 training samples, a structure that presents significant difficulties because of uneven labelling. To address this, the researchers carefully work through preprocessing procedures, including text data encoding and effective methods for handling missing information. The study employs and examines a wide range of algorithms, reflecting the heterogeneous sequential modelling landscape. A variety of neural network architectures are included: CNN, CNN-RNN, and RCNN. Each architecture is thoroughly assessed on the binary classification task at hand, revealing both its advantages and disadvantages. A key component of the study is its evaluation approach, which presents a wide range of measures indicating consistently excellent performance overall. Among these, certain algorithms stand out as the best, achieving a 97% accuracy rate across a variety of evaluation metrics. This strong performance highlights their ability to handle sequential data with unbalanced labels and establishes a standard for further work in related fields. Beyond its empirical results, the study is important because it provides a well-designed assessment approach that may be used as a benchmark by practitioners facing similar problems. By clarifying important concepts related to model selection and performance evaluation, the study provides professionals and academics with crucial resources to navigate the complex terrain of sequential modelling efficiently. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Soft metrología en analítica de datos e inteligencia artificial para la gestión de calidad manufacturera.
- Author
-
Uribe-Posada, Isabel Cristina and Delgado-Trejos, Edilson
- Subjects
LITERATURE reviews ,DATA analytics ,JOB performance ,ARTIFICIAL intelligence ,RESEARCH questions - Abstract
Copyright of Signos is the property of Universidad Santo Tomas and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
43. Predicting the risk of pulmonary infection in patients with chronic kidney failure: A-C2GH2S risk score—a retrospective study.
- Author
-
Deng, Wenqian, Liu, Chen, Cheng, Qianhui, Yang, Jingwen, Chen, Wenwen, Huang, Yao, Hu, Yu, Guan, Jiangan, Weng, Jie, Wang, Zhiyi, and Chen, Chan
- Abstract
Purpose: The objective of this study is to investigate the associated risk factors of pulmonary infection in individuals diagnosed with chronic kidney disease (CKD). The primary goal is to develop a predictive model that can anticipate the likelihood of pulmonary infection during hospitalization among CKD patients. Methods: This retrospective cohort study was conducted at two prominent tertiary teaching hospitals. Three distinct models were formulated employing three different approaches: (1) the statistics-driven model, (2) the clinical knowledge-driven model, and (3) the decision tree model. The simplest and most efficient model was obtained by comparing their predictive power, stability, and practicability. Results: This study involved a total of 971 patients, with 388 individuals comprising the modeling group and 583 individuals comprising the validation group. Three different models, namely Models A, B, and C, were utilized, resulting in the identification of seven, four, and eleven predictors, respectively. Ultimately, a statistical knowledge-driven model was selected, which exhibited a C-statistic of 0.891 (0.855–0.927) and a Brier score of 0.012. Furthermore, the Hosmer–Lemeshow test indicated that the model demonstrated good calibration. Additionally, Model A displayed a satisfactory C-statistic of 0.883 (0.856–0.911) during external validation. The statistical-driven model, known as the A-C2GH2S risk score (which incorporates factors such as albumin, C2 [previous COPD history, blood calcium], random venous blood glucose, H2 [hemoglobin, high-density lipoprotein], and smoking), was utilized to determine the risk score for the incidence rate of lung infection in patients with CKD. The findings revealed a gradual increase in the occurrence of pulmonary infections, ranging from 1.84% for individuals with an A-C2GH2S Risk Score ≤ 6, to 93.96% for those with an A-C2GH2S Risk Score ≥ 18.5. 
Conclusion: A predictive model comprising seven predictors was developed to forecast pulmonary infection in patients with CKD. The model is characterized by its simplicity and practicality, and it demonstrated good specificity and sensitivity after validation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Pharmacokinetics of polymyxin B in different populations: a systematic review.
- Author
-
Wang, Xing, Xiong, Wenqiang, Zhong, Maolian, Liu, Yan, Xiong, Yuqing, Yi, Xiaoyi, Wang, Xiaosong, and Zhang, Hong
- Subjects
- *
MEDICAL information storage & retrieval systems , *KIDNEY transplantation , *PATIENTS , *TRANSPLANTATION of organs, tissues, etc. , *EXTRACORPOREAL membrane oxygenation , *LUNG transplantation , *RESEARCH funding , *DRUG resistance in microorganisms , *CATASTROPHIC illness , *HEMODIALYSIS , *DESCRIPTIVE statistics , *ACUTE kidney failure , *SYSTEMATIC reviews , *MEDLINE , *GRAM-negative bacterial diseases , *ONLINE information services , *POLYMYXIN B , *CYSTIC fibrosis , *OBESITY , *DISEASE risk factors - Abstract
Background and objectives: Although polymyxin B (PMB) is used clinically for the treatment of infections, its limited therapeutic range, considerable interpatient variability in pharmacokinetics, and frequent occurrence of acute kidney injury have significantly hindered its widespread utilization. Recent research on the population pharmacokinetics of PMB has provided valuable insights. This study aims to review the relevant literature to establish a theoretical foundation for individualized clinical management. Methods: Following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, population pharmacokinetic (Pop-PK) studies of PMB were searched in the PubMed and EMBASE databases from their inception until March 2023. Results: To date, a total of 22 population-based studies have been conducted, encompassing 756 subjects across six different countries. The recruited population in these studies consisted of critically infected individuals with multidrug-resistant bacteria, patients with varying renal functions, those with cystic fibrosis, kidney or lung transplant recipients, patients undergoing extracorporeal membrane oxygenation (ECMO) or continuous renal replacement therapy (CRRT), as well as individuals with obesity or pediatric populations. Among these studies, seven employed a one-compartment model, with the ranges of typical clearance (CL) and volume (Vc) being 1.18–2.5 L/h and 12.09–47.2 L, respectively. Fifteen studies employed a two-compartment model, with the ranges of the clearance (CL), volume of the central compartment (Vc), volume of the peripheral compartment (Vp), and intercompartmental clearance (Q) being 1.27–8.65 L/h, 5.47–38.6 L, 4.52–174.69 L, and 1.34–24.3 L/h, respectively. Primary covariates identified in these studies included creatinine clearance and body weight, while other covariates considered were CRRT, albumin, age, and SOFA scores. 
Internal evaluation was conducted in 19 studies, with only one study being externally validated using an independent external dataset. Conclusion: We conclude that small sample sizes, lack of multicentre collaboration, and patient homogeneity are the primary reasons for the discrepancies in the results of the current studies. In addition, most of the studies were limited to internal evaluation, which constrained the implementation of model-informed precision dosing strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. SubEpiPredict: A tutorial-based primer and toolbox for fitting and forecasting growth trajectories using the ensemble n-sub-epidemic modeling framework.
- Author
-
Chowell, Gerardo, Dahal, Sushma, Bleichrodt, Amanda, Tariq, Amna, Hyman, James M., and Luo, Ruiyan
- Subjects
- *
EPIDEMICS , *COVID-19 , *EPIDEMIOLOGY , *DATA , *PLATEAUS - Abstract
An ensemble n-sub-epidemic modeling framework that integrates sub-epidemics to capture complex temporal dynamics has demonstrated powerful forecasting capability in previous works. This modeling framework can characterize complex epidemic patterns, including plateaus, epidemic resurgences, and epidemic waves characterized by multiple peaks of different sizes. In this tutorial paper, we introduce and illustrate SubEpiPredict, a user-friendly MATLAB toolbox for fitting and forecasting time series data using an ensemble n-sub-epidemic modeling framework. The toolbox can be used for model fitting, forecasting, and evaluation of model performance over the calibration and forecasting periods using metrics such as the weighted interval score (WIS). We also provide a detailed description of these methods, including the concept of the n-sub-epidemic model, constructing ensemble forecasts from the top-ranking models, etc. To illustrate the toolbox, we utilize publicly available daily COVID-19 death data at the national level for the United States. The MATLAB toolbox introduced in this paper can be very useful for a wide range of audiences, including policymakers, and can be easily utilized by those without extensive coding and modeling backgrounds. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
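The weighted interval score (WIS) mentioned in the abstract above combines an absolute-error term for the median forecast with interval scores for a set of central prediction intervals. The sketch below follows the WIS formulation commonly used in the epidemic forecasting literature; the function names are illustrative and are not the SubEpiPredict API.

```python
def interval_score(lower, upper, alpha, y):
    """Interval score of a central (1 - alpha) prediction interval [lower, upper]."""
    score = upper - lower                       # width (sharpness) penalty
    if y < lower:
        score += (2.0 / alpha) * (lower - y)    # penalty for missing below
    elif y > upper:
        score += (2.0 / alpha) * (y - upper)    # penalty for missing above
    return score

def weighted_interval_score(median, intervals, y):
    """WIS for a median forecast and a dict {alpha: (lower, upper)} of K intervals.

    Uses weights alpha/2 per interval and 1/2 for the median term,
    normalized by K + 1/2, as in the standard formulation.
    """
    k = len(intervals)
    total = 0.5 * abs(y - median)
    for alpha, (lower, upper) in intervals.items():
        total += (alpha / 2.0) * interval_score(lower, upper, alpha, y)
    return total / (k + 0.5)
```

With no intervals, the WIS reduces to the absolute error of the median forecast, which is why it is often described as a probabilistic generalization of absolute error.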
46. Beyond a fixed number: Investigating uncertainty in popular evaluation metrics of ensemble flood modeling using bootstrapping analysis.
- Author
-
Huang, Tao and Merwade, Venkatesh
- Subjects
EPISTEMIC uncertainty ,WATERSHEDS ,MEASUREMENT errors ,ENGINEERING - Abstract
Evaluation of the performance of flood models is a crucial step in the modeling process. Considering the limitations of single statistical metrics, such as uncertainty bounds, Nash-Sutcliffe efficiency, Kling-Gupta efficiency, and the coefficient of determination, which are widely used in model evaluation, the inherent properties and sampling uncertainty of these metrics are demonstrated. A comprehensive evaluation is conducted using an ensemble of one-dimensional Hydrologic Engineering Center's River Analysis System (HEC-RAS) models, which account for the uncertainty associated with channel roughness and upstream flow input, for six reaches located in Indiana and Texas in the United States. Specifically, the effects of different prior distributions of the uncertainty sources, multiple high-flow scenarios, and various types of measurement errors in observations on the evaluation metrics are investigated using bootstrapping. Results show that the model performances based on the uniform and normal priors are comparable. The statistical distributions of all the evaluation metrics in this study are significantly different under different high-flow scenarios, suggesting that the metrics should be treated as "random" variables due to both aleatory and epistemic uncertainties and conditioned on the specific flow periods of interest. Additionally, white-noise error in observations has the least impact on the metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
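The bootstrapping idea in the abstract above, treating an evaluation metric as a random variable with sampling uncertainty rather than a fixed number, can be sketched for a single metric such as the Nash-Sutcliffe efficiency (NSE). The function names and the resampling scheme (paired resampling with replacement) are illustrative assumptions, not the paper's exact procedure.

```python
import random

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SS_res / SS_tot (1.0 is a perfect fit)."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    if ss_tot == 0.0:
        return float("nan")  # undefined for a constant observed resample
    return 1.0 - ss_res / ss_tot

def bootstrap_metric(obs, sim, metric, n_boot=1000, seed=42):
    """Sampling distribution of a metric under paired resampling with replacement."""
    rng = random.Random(seed)
    n = len(obs)
    dist = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        dist.append(metric([obs[i] for i in idx], [sim[i] for i in idx]))
    return sorted(dist)
```

A 95% bootstrap interval for the metric is then given by the 2.5th and 97.5th percentiles of the returned distribution, e.g. `dist[int(0.025 * n_boot)]` and `dist[int(0.975 * n_boot)]`.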
47. Correction of the Inflow Wind Speed in the Fitch Wind Farm Parameterization Scheme and Verification of Its Effects.
- Author
-
谢泽明, 余晔, 董龙翔, 马腾, and 王雪薇
- Abstract
Copyright of Plateau Meteorology is the property of Plateau Meteorology Editorial Office and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
48. Evaluation of CMIP6 GCMs performance and future projection for the Boro and Kharif seasons over the new alluvial zones of West Bengal.
- Author
-
GOSWAMI, PURBA, SAHA, SARATHI, DAS, LALU, and BANERJEE, SAON
- Subjects
SEASONS ,RAINFALL ,CLIMATOLOGY ,ATMOSPHERIC models - Abstract
The present study examined the overall performance of 12 CMIP6 GCMs for rainfall and maximum and minimum temperature during the rice crop-growing seasons, i.e., Boro (January to May) and Kharif (June to October), over the new alluvial zone of West Bengal. A wide range of indices, i.e., the index of agreement, error indices, and bias estimators, was used to place more confidence in the results. Results indicated that the CMIP6 models reproduced the observed mean climatology and inter-annual variability of maximum and minimum temperature adequately for both seasons, while only a small number of models (3-4 out of the 12 CMIP6 GCMs) showed satisfactory performance for rainfall. The ranks assigned to the models revealed that CNRM-ESM2-1 was the best-performing model for Kharif and MRI-ESM2-0 showed the highest skill for Boro. ACCESS-CM2 and MPI-ESM1-2-LR performed worst for the Kharif and Boro seasons, respectively. Further, CNRM-ESM2-1 and MRI-ESM2-0 were used to project the future climate for the Kharif and Boro seasons, respectively, under both a moderate (SSP2-4.5) and an extreme (SSP5-8.5) scenario. Higher warming was projected during the Boro season than the Kharif season. Projections revealed increasing rainfall during the Kharif season but decreasing rainfall in the Boro season under both the moderate and extreme future scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
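The abstract above ranks GCMs using an index of agreement, error indices, and bias estimators, but does not spell out the formulas. As a sketch only, three commonly used choices (Willmott's index of agreement, RMSE, and percent bias), assuming paired NumPy arrays of observed and simulated values:

```python
import numpy as np

def index_of_agreement(obs, sim):
    """Willmott's index of agreement d (0 to 1; 1 = perfect match)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    num = np.sum((sim - obs) ** 2)
    den = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - num / den

def rmse(obs, sim):
    """Root mean square error."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def percent_bias(obs, sim):
    """Percent bias; here positive values mean the model overestimates
    (sign conventions vary between studies)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(100.0 * np.sum(sim - obs) / np.sum(obs))
```

Ranking models as in the study could then be done by averaging each model's rank across these indices, with the exact weighting being a design choice the abstract does not specify.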
49. Mathematical vs. machine learning models for particle size distribution in fragile soils of North-Western Himalayas.
- Author
-
Bashir, Owais, Bangroo, Shabir Ahmad, Shafai, Shahid Shuja, Shah, Tajamul Islam, Kader, Shuraik, Jaufer, Lizny, Senesi, Nicola, Kuriqi, Alban, Omidvar, Negar, Naresh Kumar, Soora, Arunachalam, Ayyanadar, Michael, Ruby, Ksibi, Mohamed, Spalevic, Velibor, Sestras, Paul, Marković, Slobodan B., Billi, Paolo, Ercişli, Sezai, and Hysa, Artan
- Subjects
MACHINE learning ,PARTICLE size distribution ,STANDARD deviations ,RANDOM forest algorithms ,AKAIKE information criterion - Abstract
Purpose: Particle size distribution (PSD) assessment, which affects all physical, chemical, biological, mineralogical, and geological properties of soil, is crucial for maintaining soil sustainability. It plays a vital role in ensuring appropriate land use, fertilizer management, crop selection, and conservation practices, especially in fragile soils such as those of the North-Western Himalayas. Materials and methods: In this study, the performance of eleven mathematical and three machine learning (ML) models used in the past was compared to investigate PSD modeling of different soils from the North-Western Himalayan region, considering that an appropriate model must fit all PSD data. Results and discussion: Our study focuses on the significance of evaluating goodness of fit in particle size distribution modeling using the adjusted coefficient of determination (R²adj = 0.79 to 0.45), the Akaike information criterion (AIC = 67 to 184), and the root mean square error (RMSE = 0.01 to 0.09). The Fredlund, Weibull, and Rosin-Rammler models exhibited the best fit for all samples, while the Gompertz, S-Curve, and van Genuchten models performed poorly. Of the three ML models tested, the Random Forest model performed best (R² = 0.99) and the SVM model worst (R² = 0.95). Thus, the PSD of the soil is best predicted by ML approaches, especially the Random Forest model. Conclusion: The Fredlund model exhibited the best fit among the mathematical models, while Random Forest performed best among the machine learning models. Accuracy improved as the number of model parameters increased. [ABSTRACT FROM AUTHOR]
- Published
- 2024
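The abstract above scores PSD models with R², AIC, and RMSE. As an illustration only (the study's actual fitting procedure is not reproduced here), a sketch of fitting the two-parameter Rosin-Rammler/Weibull cumulative curve P(d) = 1 - exp(-(d/dm)^n) by log-log linearization, then scoring the fit with the same three criteria:

```python
import numpy as np

def rr_cdf(d, n, dm):
    """Rosin-Rammler cumulative passing fraction."""
    return 1.0 - np.exp(-(np.asarray(d, float) / dm) ** n)

def fit_rosin_rammler(d, p):
    """Fit P(d) = 1 - exp(-(d/dm)**n) via the linearization
    ln(-ln(1 - P)) = n*ln(d) - n*ln(dm), solved by least squares."""
    d, p = np.asarray(d, float), np.asarray(p, float)
    y = np.log(-np.log(1.0 - p))          # requires 0 < p < 1
    n, c = np.polyfit(np.log(d), y, 1)    # slope n, intercept -n*ln(dm)
    dm = np.exp(-c / n)
    return n, dm

def goodness_of_fit(obs, pred, k):
    """R^2, RMSE, and AIC for a fitted model with k parameters."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    sse = np.sum((obs - pred) ** 2)
    m = len(obs)
    r2 = 1.0 - sse / np.sum((obs - obs.mean()) ** 2)
    err = np.sqrt(sse / m)
    aic = m * np.log(sse / m) + 2 * k     # one common AIC form for LSQ fits
    return r2, err, aic
```

The AIC term 2k is what penalizes the extra parameters of richer models such as Fredlund's, which is the trade-off the abstract's closing sentence alludes to.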
50. Comparative analysis of machine learning models for breast cancer prediction and diagnosis: A dual-dataset approach.
- Author
-
Awan, Muhammad Zeerak, Arif, Muhammad Shoaib, Ul Abideen, Mirza Zain, and Abodayeh, Kamaleldin
- Subjects
ARTIFICIAL neural networks ,CANCER diagnosis ,K-nearest neighbor classification ,BREAST cancer ,MACHINE learning ,SUPPORT vector machines ,COMPARATIVE studies - Abstract
Breast cancer ranks as a significant cause of mortality among females globally. Its complex nature poses principal challenges to physicians and researchers for rapid diagnosis and prognosis. Hence, machine learning algorithms are employed to forecast and identify the disease. This study presents a comparative analysis of seven machine learning models, i.e., logistic regression (LR), support vector machine (SVM), k-nearest neighbor classifier (KNN), decision tree classifier (DT), random forest classifier (RF), Naïve Bayes (NB), and artificial neural network (ANN), for predicting breast cancer using the Wisconsin breast cancer (WBC) and breast cancer (BC) datasets. On the Wisconsin breast cancer dataset, KNN achieved 99% accuracy, followed by RF (98%), SVM (96%), NB (96%), LR (96%), ANN (93%), and DT (92%). On the breast cancer (BC) dataset, by contrast, the highest accuracy was achieved by LR at 83% and the lowest by DT at 65%, indicating that the numeric WBC dataset yields better accuracy than the breast cancer dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2024
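The abstract above compares classifier accuracies. As a hedged sketch rather than the study's pipeline, a from-scratch k-nearest-neighbor classifier (the study's best performer on WBC) and an accuracy score, the kind of model/metric pair being compared:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Minimal k-nearest-neighbor classifier: Euclidean distance,
    majority vote among the k closest training points."""
    X_train = np.asarray(X_train, float)
    X_test = np.asarray(X_test, float)
    y_train = np.asarray(y_train)
    preds = []
    for x in X_test:
        dist = np.linalg.norm(X_train - x, axis=1)   # distance to every training point
        nearest = y_train[np.argsort(dist)[:k]]      # labels of k nearest neighbors
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])        # majority vote
    return np.array(preds)

def accuracy(y_true, y_pred):
    """Fraction of correct predictions, the metric reported in the study."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))
```

In practice a library implementation (e.g., scikit-learn's `KNeighborsClassifier`) with a proper train/test split would be used; the loop above just makes the voting logic explicit.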