178 results
Search Results
2. A magnetic-mechanical strong coupling FEM model and vibration characteristic analysis of transformer core under DC bias.
- Author
- Wang, Yaqi, Li, Lin, Sun, Jia'an, and Zhao, Xiaojun
- Subjects
- BIAS correction (Topology), FINITE element method, ELECTROMAGNETIC induction, VARIATIONAL principles, MAGNETIC fields, MAGNETIC circuits
- Abstract
The vibration of a transformer core induced by magnetostriction is closely related to the environment and safety of devices. DC bias excitation drives the transformer core into oversaturation and causes more severe vibration. In this paper, a magnetic-mechanical strong coupling model based on a fixed-point time-periodic 2D finite element method, with excitation voltage and DC bias current as the input, is established, in which the magnetic vector potential A, winding current I, and displacement u in the core are simultaneously taken as solution variables, and the fixed-point reluctivity is introduced to deal with the nonlinearity of the core. In this model, the magnetic and mechanical fields are coupled through the piezomagnetic equations and the variational principle, and the circuit and magnetic fields are connected through the law of electromagnetic induction. The equivalent thickness of the core model and the determination of the initial value are discussed. A test transformer with a laminated core is built for experimental measurement. The accuracy and efficiency of the proposed model are verified by comparing the calculated results with the experimental results, and the influence of DC bias components on the magnetic characteristics and vibration of the transformer core is analyzed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
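The fixed-point reluctivity technique mentioned in entry 2 can be illustrated on a scalar toy problem. Everything below (the reluctivity curve, the constants) is an invented sketch of the general fixed-point/polarization idea, not the paper's 2D FEM formulation.

```python
def nu(B):
    # Hypothetical nonlinear reluctivity curve nu(B) (inverse permeability),
    # rising as the material saturates; NOT the paper's measured curve.
    return 100.0 + 50.0 * B ** 2

def solve_B_fixed_point(H, nu_fp=400.0, tol=1e-10, max_iter=1000):
    """Solve H = nu(B)*B for B via the fixed-point (polarization) method:
    split nu(B)*B = nu_fp*B + M(B) with M(B) = (nu(B) - nu_fp)*B, then
    iterate B <- (H - M(B)) / nu_fp with a constant fixed-point
    reluctivity nu_fp, which keeps the system matrix linear."""
    B = 0.0
    for _ in range(max_iter):
        M = (nu(B) - nu_fp) * B
        B_new = (H - M) / nu_fp
        if abs(B_new - B) < tol:
            return B_new
        B = B_new
    return B
```

Choosing `nu_fp` at least as large as the reluctivity over the operating range keeps the iteration contractive, which is the same reason the method is attractive inside a time-periodic FEM loop.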
3. Using Genetic Algorithms and Core Values of Cooperative Games to Solve Fuzzy Multiobjective Optimization Problems.
- Author
- Wu, Hsien-Chung
- Subjects
- COOPERATIVE game theory, GENETIC algorithms, ASSIGNMENT problems (Programming), BIAS correction (Topology), GAMES
- Abstract
A new methodology for solving fuzzy multiobjective optimization problems is proposed in this paper by fusing cooperative game theory with a genetic algorithm. The original fuzzy multiobjective optimization problem must first be transformed into a scalar optimization problem, which is a conventional optimization problem. Usually, the coefficients of the corresponding scalar optimization problem are assigned subjectively by the decision makers, and these assignments may introduce bias. Therefore, this paper proposes a mechanical procedure to avoid this subjective bias. We formulate a cooperative game using the α-level functions of the multiple fuzzy objective functions. Under this setting, suitable coefficients can be determined mechanically from the core values of this cooperative game. We prove that the optimal solutions of the transformed scalar optimization problem are indeed nondominated solutions of the fuzzy multiobjective optimization problem. Since these core-nondominated solutions depend on the coefficients determined by the core values of the cooperative game, there are many core-nondominated solutions, each depending on the corresponding coefficients. To obtain the best core-nondominated solution, we invoke a genetic algorithm to evolve the coefficients. [ABSTRACT FROM AUTHOR]
- Published
- 2024
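The scalarization step that entry 3 builds on can be sketched generically. The weights below are arbitrary illustrations; in the paper they are instead derived from core values of a cooperative game over the α-level functions.

```python
def scalarize(objectives, weights):
    """Weighted-sum scalarization: combine objectives f_i into the single
    objective sum_i w_i * f_i(x). The weights here are illustrative; the
    paper determines them mechanically from core values rather than by
    subjective choice."""
    def f(x):
        return sum(w * obj(x) for w, obj in zip(weights, objectives))
    return f

# Two toy objectives on a 1-D decision variable.
f1 = lambda x: (x - 1.0) ** 2
f2 = lambda x: (x + 1.0) ** 2

scalar = scalarize([f1, f2], [0.5, 0.5])
# Minimize by a coarse grid search; with equal weights the minimizer
# lies midway between the two individual minimizers.
xs = [i / 1000.0 - 2.0 for i in range(4001)]
best = min(xs, key=scalar)
```

Any minimizer of such a positively weighted sum is a (weakly) nondominated point of the original multiobjective problem, which is the property the paper's proof generalizes to the fuzzy setting.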
4. Record-based transmuted generalized linear exponential distribution with increasing, decreasing and bathtub shaped failure rates.
- Author
- Arshad, Mohd, Khetan, Mukti, Kumar, Vijay, and Pathak, Ashok Kumar
- Subjects
- DISTRIBUTION (Probability theory), MONTE Carlo method, PROBABILITY density function, LEAST squares, MAXIMUM likelihood statistics, BIAS correction (Topology), EXPONENTIAL functions, BAYES' estimation
- Abstract
The linear exponential distribution is a generalization of the exponential and Rayleigh distributions. It is one of the best models for data with an increasing failure rate (IFR), but it does not provide a reasonable fit for data with a decreasing failure rate (DFR) or a bathtub-shaped failure rate (BTFR). To overcome this drawback, we propose a new record-based transmuted generalized linear exponential (RTGLE) distribution using the technique of Balakrishnan and He. The family of RTGLE distributions is flexible enough to fit data sets with IFR, DFR, and BTFR, and it generalizes several well-known models as well as some new record-based transmuted models. This paper studies the statistical properties of the RTGLE distribution, such as the shape of the probability density function and hazard function, the quantile function and its applications, moments and the moment generating function, order and record statistics, and Rényi entropy. The maximum likelihood, least squares and weighted least squares, Anderson-Darling, and Cramér-von Mises estimators of the unknown parameters are constructed, and their biases and mean squared errors are reported via a Monte Carlo simulation study. Finally, real data sets illustrate the goodness of fit and applicability of the proposed distribution, and suitable recommendations are given. [ABSTRACT FROM AUTHOR]
- Published
- 2024
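The linear exponential (linear failure-rate) baseline that entry 4 generalizes has hazard h(t) = a + b·t and CDF F(t) = 1 − exp(−(a·t + b·t²/2)); a minimal sketch:

```python
import math

def lfr_hazard(t, a, b):
    # Linear failure-rate (linear exponential) hazard: h(t) = a + b*t.
    return a + b * t

def lfr_cdf(t, a, b):
    # CDF from the cumulative hazard H(t) = a*t + b*t**2/2.
    return 1.0 - math.exp(-(a * t + 0.5 * b * t * t))
```

Setting b = 0 recovers the exponential distribution and a = 0 the Rayleigh, which is exactly why the hazard can only be constant or monotonically increasing, never decreasing or bathtub-shaped, motivating the RTGLE extension.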
5. A Bayesian Approach to Uncertainty in Word Embedding Bias Estimation.
- Author
- Dobrzeniecka, Alicja and Urbaniak, Rafal
- Subjects
- ESTIMATION bias, RACE, SOURCE code, STATISTICAL significance, BIAS correction (Topology), SAMPLE size (Statistics)
- Abstract
Multiple measures, such as WEAT or MAC, attempt to quantify the magnitude of bias present in word embeddings in terms of a single-number metric. However, such metrics and the related statistical significance calculations rely on treating pre-averaged data as individual data points and utilizing bootstrapping techniques with low sample sizes. We show that similar results can be easily obtained using such methods even if the data are generated by a null model lacking the intended bias. Consequently, we argue that this approach generates false confidence. To address this issue, we propose a Bayesian alternative: hierarchical Bayesian modeling, which enables a more uncertainty-sensitive inspection of bias in word embeddings at different levels of granularity. To showcase our method, we apply it to Religion, Gender, and Race word lists from the original research, together with our control neutral word lists. We deploy the method using Google, GloVe, and Reddit embeddings. Further, we utilize our approach to evaluate a debiasing technique applied to the Reddit word embedding. Our findings reveal a more complex landscape than suggested by the proponents of single-number metrics. The datasets and source code for the paper are publicly available. [ABSTRACT FROM AUTHOR]
- Published
- 2024
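Single-number scores such as MAC, criticized in entry 5, boil down to averaged cosine similarities between target-word and attribute-word vectors. A toy sketch (the published MAC averages cosine distances, and the vectors here are invented):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    na = math.sqrt(sum(a * a for a in u))
    nb = math.sqrt(sum(b * b for b in v))
    return dot / (na * nb)

def mean_avg_cosine(targets, attributes):
    """Single-number association score in the spirit of MAC: for each
    target vector, average its cosine similarity to every attribute
    vector, then average over targets. The paper's point is that such
    point estimates, especially after pre-averaging, hide uncertainty."""
    per_target = []
    for t in targets:
        sims = [cosine(t, a) for a in attributes]
        per_target.append(sum(sims) / len(sims))
    return sum(per_target) / len(per_target)
```

The hierarchical Bayesian alternative proposed in the paper would instead place a model over the per-pair similarities and report posterior uncertainty rather than this single averaged number.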
6. Locally Anisotropic Nonstationary Covariance Functions on the Sphere.
- Author
- Cao, Jian, ZHANG, Jingjie, SUN, Zhuoer, and Katzfuss, Matthias
- Subjects
- GEOSPATIAL data, FLEXIBLE structures, INFERENTIAL statistics, STATISTICAL correlation, BIAS correction (Topology)
- Abstract
Rapid developments in satellite remote-sensing technology have enabled the collection of geospatial data on a global scale, hence increasing the need for covariance functions that can capture spatial dependence on spherical domains. We propose a general method of constructing nonstationary, locally anisotropic covariance functions on the sphere based on covariance functions in R^3. We also provide theorems that specify the conditions under which the resulting correlation function is isotropic or axially symmetric. For large datasets on the sphere commonly seen in modern applications, the Vecchia approximation is used to achieve higher scalability on statistical inference. The importance of flexible covariance structures is demonstrated numerically using simulated data and a precipitation dataset. Supplementary materials accompanying this paper appear online. [ABSTRACT FROM AUTHOR]
- Published
- 2024
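The basic construction in entry 6, restricting a valid covariance in R^3 to the unit sphere via chordal distance, can be sketched in its isotropic special case (the variance and length scale below are arbitrary):

```python
import math

def lonlat_to_xyz(lon_deg, lat_deg):
    # Embed a point on the unit sphere into R^3.
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def chordal_exp_cov(p1, p2, variance=1.0, length=0.5):
    """Valid covariance on the sphere obtained by evaluating an
    exponential covariance in R^3 at the chordal (straight-line)
    distance between two lon/lat points. The paper's construction is
    more general (nonstationary, locally anisotropic); this is the
    isotropic special case."""
    x1, x2 = lonlat_to_xyz(*p1), lonlat_to_xyz(*p2)
    d = math.dist(x1, x2)  # chordal distance through the ball
    return variance * math.exp(-d / length)
```

Because any positive-definite function on R^3 stays positive definite when restricted to the embedded sphere, this avoids the validity pitfalls of plugging great-circle distance into covariance families directly.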
7. FCDS-DETR: detection transformer based on feature correction and double sampling.
- Author
- Wang, Min, Jiao, Zhiqiang, Huang, Zhanhua, and Yu, Shihang
- Subjects
- BIAS correction (Topology), PROBLEM solving, POINT set theory
- Abstract
The recently proposed semantic-aligned matching detection transformer (SAM–DETR) accelerates the convergence of the detection transformer (DETR) by mapping object queries into the same embedding space as the encoder's output feature map. However, the SAM–DETR model has lower detection accuracy than other DETR variants. We observe that this lower accuracy is caused by an insufficient number of sample points and inaccurate localization of the sample points during re-sampling, which blurs the generated attention map. This paper proposes an object detector based on feature correction and double sampling DETR (FCDS-DETR) to solve this problem. FCDS-DETR takes SAM–DETR as a baseline and adds a feature correction module and a double sampling mechanism, achieving further improvement in detection accuracy with a limited number of additional parameters and without sacrificing convergence speed. Firstly, FCDS-DETR improves sampling-point localization accuracy by adding a feature correction module that models the inter-channel dependence of the feature maps to be sampled. Secondly, the number of sampled points is increased by the double sampling mechanism, and attention fusion is used to fuse the attention weight maps corresponding to the two sets of sampled points, improving the recognizability of the attention weight maps. The experimental results show that the average precision is improved by +0.7 on the COCO dataset compared with the SAM–DETR model, while the number of parameters is increased by only 10.34%, a substantial improvement in detection performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
8. Fourier Ptychographic Neural Network Combined with Zernike Aberration Recovery and Wirtinger Flow Optimization.
- Author
- Wang, Xiaoli, Lin, Zechuan, Wang, Yan, Li, Jie, Wang, Xinbo, and Wang, Hao
- Subjects
- ZERNIKE polynomials, BACK propagation, OPTICAL aberrations, TRANSFER functions, BIAS correction (Topology), OPTICAL images, MICROSCOPY
- Abstract
Fourier ptychographic microscopy, as a computational imaging method, can reconstruct high-resolution images but suffers from optical aberrations, which degrade its imaging quality. For this reason, this paper proposes a network model that simulates the forward imaging process in the TensorFlow framework, using samples and coherent transfer functions as the input. The proposed model improves the Wirtinger flow algorithm: it retains the central idea, simplifies the calculation process, and optimizes the update through backpropagation. In addition, Zernike polynomials are used to accurately estimate aberration. The simulation and experimental results show that this method can effectively improve the accuracy of aberration correction, maintain good correction performance in complex scenes, and reduce the influence of optical aberration on imaging quality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
9. Asymptotic properties of Spearman's footrule and Gini's gamma in bivariate normal model.
- Author
- Chen, Changrun, Xu, Weichao, Zhang, Weifeng, Zhu, Hongbin, and Dai, Jisheng
- Subjects
- ASYMPTOTIC efficiencies, TIME delay estimation, STATISTICAL correlation, SIGNAL processing, BIAS correction (Topology)
- Abstract
This paper aims at rejuvenating two rank correlation coefficients, Spearman's footrule (SF) and Gini's gamma (GG), which have long been neglected in the literature owing to limited knowledge of their statistical properties. Under the common bivariate normal model, we establish asymptotic analytical expressions for the mean and variance of SF and GG, and investigate their performance in terms of bias, approximate variance, and asymptotic relative efficiency (ARE). Moreover, we study the robustness of SF and GG under contaminated normal models. To gain a deeper understanding of their performance, we also compare SF and GG with Kendall's tau (KT) and Spearman's rho (SR), the most widely used rank correlation coefficients, in terms of bias and mean square error (MSE) under both the normal and contaminated normal models. Finally, we show an application of SF and GG in the field of signal processing through the example of time-delay estimation. Simulation results indicate that SF and GG outperform SR and KT in some cases. The new findings in this paper enable SF and GG to play roles complementary to KT and SR in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2023
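The two coefficients studied in entry 9 have simple rank-based formulas (as usually given in the literature); a sketch:

```python
def spearman_footrule(p, q):
    """Spearman's footrule for two rank vectors (ranks 1..n):
    phi = 1 - 3 * sum|p_i - q_i| / (n^2 - 1)."""
    n = len(p)
    return 1.0 - 3.0 * sum(abs(a - b) for a, b in zip(p, q)) / (n * n - 1)

def gini_gamma(p, q):
    """Gini's gamma (cograduation index) for two rank vectors:
    gamma = sum(|p_i + q_i - n - 1| - |p_i - q_i|) / floor(n^2 / 2)."""
    n = len(p)
    num = sum(abs(a + b - n - 1) - abs(a - b) for a, b in zip(p, q))
    return num / (n * n // 2)
```

Both take the value 1 for identical rankings, and Gini's gamma reaches −1 for exactly reversed rankings, mirroring the behaviour of the better-known Kendall and Spearman coefficients they are compared against.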
10. Adaptive scatter kernel deconvolution modeling for cone‐beam CT scatter correction via deep reinforcement learning.
- Author
- Piao, Zun, Deng, Wenxin, Huang, Shuang, Lin, Guoqin, Qin, Peishan, Li, Xu, Wu, Wangjiang, Qi, Mengke, Zhou, Linghong, Li, Bin, Ma, Jianhui, and Xu, Yuan
- Subjects
- DEEP reinforcement learning, CONE beam computed tomography, BIAS correction (Topology), DEEP learning, REINFORCEMENT learning, PHOTON scattering, SIGNAL-to-noise ratio
- Abstract
Background: Scattering photons can seriously contaminate cone‐beam CT (CBCT) image quality with severe artifacts and substantial degradation of CT value accuracy, which is a major concern limiting the widespread application of CBCT in the medical field. The scatter kernel deconvolution (SKD) method commonly used in clinic requires a Monte Carlo (MC) simulation to determine numerous quality‐related kernel parameters, and it cannot realize intelligent scatter kernel parameter optimization, causing limited accuracy of scatter estimation. Purpose: Aiming at improving the scatter estimation accuracy of the SKD algorithm, an intelligent scatter correction framework integrating the SKD with deep reinforcement learning (DRL) scheme is proposed. Methods: Our method firstly builds a scatter kernel model to iteratively convolve with raw projections, and then the deep Q‐network of the DRL scheme is introduced to intelligently interact with the scatter kernel to achieve a projection adaptive parameter optimization. The potential of the proposed framework is demonstrated on CBCT head and pelvis simulation data and experimental CBCT measurement data. Furthermore, we have implemented the U‐net based scatter estimation approach for comparison. Results: The simulation study demonstrates that the mean absolute percentage error (MAPE) of the proposed method is less than 9.72% and the peak signal‐to‐noise ratio (PSNR) is higher than 23.90 dB, while for the conventional SKD algorithm, the minimum MAPE is 17.92% and the maximum PSNR is 19.32 dB. In the measurement study, we adopt a hardware‐based beam stop array algorithm to obtain the scatter‐free projections as a comparison baseline, and our method can achieve superior performance with MAPE < 17.79% and PSNR > 16.34 dB. 
Conclusions: In this paper, we propose an intelligent scatter correction framework that integrates the physical scatter kernel model with DRL algorithm, which has the potential to improve the accuracy of the clinical scatter correction method to obtain better CBCT imaging quality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
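The scatter-kernel idea underlying entry 10 can be caricatured in 1D: model scatter as the primary signal convolved with a kernel and subtract it iteratively. The kernel and amplitudes below are invented, not the paper's MC-derived or DRL-tuned parameters:

```python
def convolve_same(x, k):
    # 'same'-length discrete convolution with an odd-length kernel.
    half = len(k) // 2
    out = []
    for i in range(len(x)):
        acc = 0.0
        for j, kj in enumerate(k):
            idx = i + j - half
            if 0 <= idx < len(x):
                acc += kj * x[idx]
        out.append(acc)
    return out

def estimate_scatter(projection, kernel, iterations=8):
    """Toy 1-D scatter-kernel-style correction: assume measured =
    primary + scatter, with scatter modeled as the primary convolved
    with a kernel; recover the primary by iterating
    primary <- measured - conv(primary, kernel)."""
    primary = projection[:]
    for _ in range(iterations):
        scatter = convolve_same(primary, kernel)
        primary = [m - s for m, s in zip(projection, scatter)]
    return primary
```

The iteration is a contraction whenever the kernel's total weight is below 1, which is the regime where this deconvolution-by-iteration converges; the paper's contribution is letting a deep Q-network adapt the kernel parameters per projection instead of fixing them from Monte Carlo simulation.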
11. U-net based vortex detection in Bose–Einstein condensates with automatic correction for manually mislabeled data.
- Author
- Ye, Jing, Huang, Yue, and Liu, Keyan
- Subjects
- BOSE-Einstein condensation, CONDENSED matter physics, MACHINE learning, BIAS correction (Topology), IMAGE recognition (Computer vision), VORTEX methods, PETRI nets
- Abstract
Quantum vortices in Bose–Einstein condensates (BECs) are essential phenomena in condensed matter physics, and precisely locating their positions, especially the vortex core, is a precondition for studying their properties. With the rise of machine learning, there is a possibility to expedite the localization process and provide accurate predictions. However, traditional machine learning requires a considerable amount of manual data annotation, leading to uncontrollable accuracy. In this paper, we utilize the U-Net method to detect vortex positions accurately at the pixel level and propose an Automatic Correction Labeling (ACL) approach to optimize the acquisition of data sets for vortex localization in BECs. This approach addresses inaccuracies in the labeled vortex positions and improves the accuracy of vortex localization, especially of the vortex core positions, while increasing tolerance for human mislabeling. The main process is Rough Labeling → Machine Learning → Probability Region Search → Data Relabeling → Machine Learning again. The objective of ACL is to secure more accurate labeled data for model retraining. Through vortex localization experiments conducted in a two-dimensional BEC, our results establish the following: 1. Even under conditions of biased and missing manual annotations, U-Net can still accurately locate vortex positions; 2. Vortices exhibit certain regularities, and training U-Net with a small number of samples yields excellent predictive results; 3. The machine learning vortex locator based on the ACL method effectively corrects errors in manually annotated data, significantly improving the model's performance metrics and thus the precision of vortex localization. This advance in applying machine learning to vortex localization provides an effective route for studying vortex dynamics. 
Furthermore, this method of using machine learning to refine approximate human labels into more accurate positions offers new insights for other types of image recognition problems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
12. Editorial June 2024 (Vol 40, Issue 6).
- Author
- Magnenat-Thalmann, Nadia
- Subjects
- POSE estimation (Computer vision), OBJECT recognition (Computer vision), BIAS correction (Topology), HIGH dynamic range imaging, OPTICAL remote sensing
- Abstract
The June 2024 issue of Visual Computer includes six best papers from the Cyberworlds 2022 conference, five best papers from the second part of the special issue on Deep Learning for 3D Segmentation, and 34 regular papers. The selected papers from the Cyberworlds conference cover topics such as procedural modeling, context-aware personality estimation, immersive haptic simulation, video compression-cum-classification, semantics-aware addition and LoD of 3D window details, and military education in extended reality. The papers from the Deep Learning for 3D Segmentation section focus on point cloud semantic segmentation, structure-preserving image smoothing, multimodal feature fusion network, unsupervised contrastive learning, and 6D pose estimation of reflective texture-less objects. The regular papers cover a wide range of topics in computer graphics and visualization. The issue is edited by Nadia Magnenat Thalmann, the Editor-in-Chief of the Visual Computer. [Extracted from the article]
- Published
- 2024
13. Correction to: On statistical convergence and strong Cesàro convergence by moduli for double sequences.
- Author
- León-Saavedra, Fernando, Listán-García, María del Carmen, and Romero de la Rosa, María del Pilar
- Subjects
- LOGIC, BIAS correction (Topology), TECHNOLOGY convergence
- Abstract
We correct a logic mistake in our paper "On statistical convergence and strong Cesàro convergence by moduli for double sequences" (León-Saavedra et al. in J. Inequal. Appl. 2022:62, 2022). [ABSTRACT FROM AUTHOR]
- Published
- 2023
14. Image‐based scatter correction for cone‐beam CT using flip swin transformer U‐shape network.
- Author
- Zhang, Xueren, Jiang, Yangkang, Luo, Chen, Li, Dengwang, Niu, Tianye, and Yu, Gang
- Subjects
- TRANSFORMER models, CONE beam computed tomography, CONVOLUTIONAL neural networks, IMAGE-guided radiation therapy, STANDARD deviations, IMAGE reconstruction algorithms, BIAS correction (Topology)
- Abstract
Background: Cone beam computed tomography (CBCT) plays an increasingly important role in image-guided radiation therapy. However, the image quality of CBCT is severely degraded by excessive scatter contamination, especially in the abdominal region, hindering its further applications in radiation therapy. Purpose: To restore low-quality CBCT images contaminated by scatter signals, a scatter correction algorithm combining the advantages of convolutional neural networks (CNN) and the Swin Transformer is proposed. Methods: In this paper a scatter correction model for CBCT images, the Flip Swin Transformer U-shape network (FSTUNet) model, is proposed. In this model, the advantages of CNN in texture detail and of the Swin Transformer in global correlation are used to accurately extract shallow and deep features, respectively. Instead of using the original Swin Transformer tandem structure, we build the Flip Swin Transformer Block to achieve a more powerful inter-window association extraction. The validity and clinical relevance of the method are demonstrated through extensive experiments on a Monte Carlo (MC) simulation dataset and a frequency-split dataset generated by a validated method, respectively. Results: Experimental results on the MC simulated dataset show that the root mean square error of images corrected by the method is reduced from over 100 HU to about 7 HU. Both the structural similarity index measure (SSIM) and the universal quality index (UQI) are close to 1. Experimental results on the frequency-split dataset demonstrate that the method not only corrects shading artifacts but also exhibits a high degree of structural consistency. In addition, comparison experiments show that FSTUNet outperforms the UNet, Deep Residual Convolutional Neural Network (DRCNN), DSENet, Pix2pixGAN, and 3DUnet methods in both qualitative and quantitative metrics. 
Conclusions: Accurately capturing the features at different levels is greatly beneficial for reconstructing high‐quality scatter‐free images. The proposed FSTUNet method is an effective solution to CBCT scatter correction and has the potential to improve the accuracy of CBCT image‐guided radiation therapy. [ABSTRACT FROM AUTHOR]
- Published
- 2023
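One of the quality metrics reported in entry 14, the universal quality index (UQI) of Wang and Bovik, can be computed as follows (images flattened to 1-D lists for simplicity):

```python
def universal_quality_index(x, y):
    """Universal Quality Index (Wang & Bovik):
    Q = 4*cov(x,y)*mean(x)*mean(y)
        / ((var(x)+var(y)) * (mean(x)^2 + mean(y)^2)).
    Equals 1 for identical images; jointly penalizes loss of
    correlation, luminance distortion, and contrast distortion."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return 4 * cxy * mx * my / ((vx + vy) * (mx * mx + my * my))
```

In practice UQI is usually computed over a sliding window and averaged, which is presumably how the near-1 values in the abstract were obtained.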
15. A deep learning-based prediction model for reference crop evapotranspiration (基于深度学习的参考作物蒸散量预测模型).
- Author
- 潘振华, 刘子菡, 沈欣, 张钟莉莉, 史凯丽, and 张石锐
- Subjects
- DEEP learning, PREDICTION models, METEOROLOGICAL stations, SENSITIVITY analysis, CONSTRUCTION costs, BIAS correction (Topology)
- Abstract
To predict reference crop evapotranspiration (ET0) scientifically and accurately, improve prediction accuracy, and reduce the number of input variables, thereby lowering the construction cost of intelligent water-saving irrigation systems, this paper uses deep learning and artificial neural network methods to establish intelligent ET0 prediction models; local sensitivity analysis, fuzzy curve, and fuzzy surface methods are used to study the influence of each input variable on the prediction results. Based on the size of these influence factors, eight input combinations of different meteorological factors were constructed. Daily meteorological data from the Rizhao weather station were used to train and test the prediction models with different methods and input variable combinations, and the results of the Penman formula were used as the reference for evaluating model performance. The results showed that with the complete set of variables as input, the R2 of the deep learning model was 0.980, higher than the 0.963 of the artificial neural network model, giving higher prediction accuracy. For ET0 prediction with missing input variables, the deep learning model again outperformed the artificial neural network; with only average temperature and sunshine duration as inputs, its R2 still reached 0.935, indicating that the deep learning model can obtain good predictions even when only a few meteorological parameters are available. 
Comprehensive analysis of R2, RMSE, RMSRE, MRE, and MAPE showed that when regional climate data are limited, the deep learning models with input combinations (n, T, RH, Tmin, Ws) and (n, T, RH, Ws) can be used for prediction; compared with the results of the Penman formula, their MAPE values were 8.753 and 8.404, respectively, and both R2 values exceeded 0.98, so they can serve as standard prediction models. [ABSTRACT FROM AUTHOR]
- Published
- 2023
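The Penman reference used in entry 15 is, in its FAO-56 Penman-Monteith form, directly computable from daily weather data; a sketch (the psychrometric constant below is a typical near-sea-level value):

```python
import math

def penman_monteith_et0(T, Rn, G, u2, RH, gamma=0.066):
    """FAO-56 Penman-Monteith reference evapotranspiration (mm/day).
    T: mean air temperature (deg C); Rn: net radiation (MJ/m2/day);
    G: soil heat flux (MJ/m2/day); u2: wind speed at 2 m (m/s);
    RH: mean relative humidity (%); gamma: psychrometric constant
    (kPa/degC), which in FAO-56 is computed from air pressure."""
    es = 0.6108 * math.exp(17.27 * T / (T + 237.3))   # saturation vapour pressure (kPa)
    ea = es * RH / 100.0                              # actual vapour pressure (kPa)
    delta = 4098.0 * es / (T + 237.3) ** 2            # slope of the es curve (kPa/degC)
    num = 0.408 * delta * (Rn - G) + gamma * 900.0 / (T + 273.0) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))
```

The data-driven models in the paper approximate this quantity from fewer inputs; the formula itself needs temperature, radiation, wind, and humidity, which is exactly the sensor cost the paper tries to reduce.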
16. Probability-guaranteed state estimation for nonlinear delayed systems under mixed attacks.
- Author
- Yi, Xiaojian, Yu, Huiyang, Fang, Ziying, and Ma, Lifeng
- Subjects
- NONLINEAR estimation, TIME delay estimation, NONLINEAR systems, HATE crimes, CYBERTERRORISM, TIME-varying systems, BIAS correction (Topology)
- Abstract
In this paper, the networked set-membership state estimation problem is discussed for a class of nonlinear discrete time-varying systems subject to cyber attacks and time delays. Two forms of malicious attack (Denial-of-Service (DoS) attacks and bias injection attacks) are taken into account to describe the adversary's attempts to destroy or degrade system performance via the communication network. The aim is to propose a set-membership state estimator that ensures the required estimation performance despite both the external mixed attacks and the internal time delays. By resorting to the feasibility of a series of matrix inequalities, sufficient conditions are provided for the solvability of the addressed state estimator design problem. Furthermore, an optimisation strategy is developed to seek locally optimal estimator parameters. Finally, a numerical simulation example demonstrates the effectiveness of the proposed theoretical algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2023
17. Image-based bolt self-localization and bolt-loosening detection using deep learning and an improved homography-based prospective rectification method.
- Author
- Xie, ChengQian, Luo, Jun, Tang, KaiSen, and Zhong, Yongli
- Subjects
- DEEP learning, MACHINE learning, LOCALIZATION (Mathematics), BIAS correction (Topology), FLANGES
- Abstract
Bolt-loosening detection methods have attracted attention from both practicing engineers and academic researchers. Existing methods focus only on identifying bolts with deep learning; bolt localization and the influence of shadows on distortion correction are seldom studied. In this paper, a bolt self-localization method based on the YOLOv4 deep learning algorithm is proposed. A bolt numbering rule is established, and deep learning is introduced to identify the number and locate each bolt. A new square bolt gasket is proposed, whose four corner points are used for distortion correction. To reduce the influence of shadows, a grayscale enhancement strategy is proposed to improve correction stability. Finally, a laboratory flange joint is used to verify the proposed method. The results show that the bolt self-localization method is feasible and that the new bolt gasket effectively improves the stability of distortion correction and bolt-loosening detection. [ABSTRACT FROM AUTHOR]
- Published
- 2023
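The homography-based rectification in entry 17 maps the four detected gasket corners to a fronto-parallel square. A dependency-free sketch of estimating and applying such a homography (fixing h33 = 1 is one common DLT convention; the paper's improvement adds grayscale enhancement on top of this step):

```python
def gauss_solve(A, b):
    # Solve A h = b by Gaussian elimination with partial pivoting.
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (M[r][n] - sum(M[r][c] * h[c] for c in range(r + 1, n))) / M[r][r]
    return h

def homography_from_points(src, dst):
    """Estimate the 3x3 homography mapping four src corners to four dst
    corners (direct linear transform, 8 unknowns with h33 = 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = gauss_solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_homography(H, pt):
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

Four point correspondences with no three collinear determine the homography exactly, which is why a square gasket with sharply detectable corners is a convenient calibration target.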
18. BDS-3 Triple-Frequency Timing Group Delay/Differential Code Bias and Its Effect on Positioning.
- Author
- Du, Yanjun, Yang, Yuanxi, Jia, Xiaolin, Yao, Wanqiang, Li, Jiahao, and Li, Qin
- Subjects
- BEIDOU satellite navigation system, GLOBAL Positioning System, BIAS correction (Topology), AMBIGUITY
- Abstract
BeiDou Global Navigation Satellite System (BDS-3) broadcasts multifrequency signals that offer more choices of frequencies and more signal combinations for positioning. This paper analyzes the effect of timing group delay (TGD) and differential code bias (DCB) of BDS-3 on the corresponding triple-frequency positioning. The triple-frequency observation models of BDS-3 are summarized and the DCB correction models are derived for the four different frequency combinations of triple-frequency ionospheric-free (IF) combination (IF123), two dual-frequency IF combinations (IF1213) and triple-frequency uncombined (UC123) positioning modes. Standard point positioning (SPP) and precise point positioning (PPP) experiments were conducted using 30 days of observations from 25 multi-GNSS experiment (MGEX) stations. The results show that the TGD/DCB correction has a significant impact on the accuracy of SPP. The positioning accuracy using IF123 and IF1213 models improved by about 73~90% after TGD correction, in comparison to a 27~30% improvement achieved using the UC123 model. In addition, the correction effect of DCB is slightly better than TGD. The DCB correction significantly improves accuracy in the initial epoch of the PPP, which helps the convergence of the filtering and reduces the convergence time. The average convergence times of IF123, IF1213 and UC123 are 26.1, 26.9 and 38.3 min, respectively, which are reduced by 6.79, 2.54 and 8.59% with DCB correction. The pseudorange residuals are closer to zero-mean random noise after DCB correction. Furthermore, the DCB affects the evaluation of the inter-frequency bias (IFB), ionospheric delay and floating ambiguity parameters. However, the tropospheric delay is almost unaffected by DCB. [ABSTRACT FROM AUTHOR]
- Published
- 2023
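The ionospheric-free combination analyzed in entry 18 follows from the 1/f² scaling of the first-order ionospheric delay; a dual-frequency sketch using the published BDS-3 B1C/B2a carrier frequencies:

```python
def iono_free(P1, P2, f1, f2):
    """Dual-frequency ionospheric-free pseudorange combination:
    P_IF = (f1^2*P1 - f2^2*P2) / (f1^2 - f2^2).
    Eliminates the first-order ionospheric delay, which scales as
    1/f^2; TGD/DCB corrections are then applied on top of this."""
    a = f1 * f1 / (f1 * f1 - f2 * f2)
    b = -f2 * f2 / (f1 * f1 - f2 * f2)
    return a * P1 + b * P2

# BDS-3 B1C and B2a carrier frequencies in Hz (signal plan values).
F_B1C, F_B2A = 1575.42e6, 1176.45e6
```

The same pattern underlies the paper's IF123 and IF1213 models; the combination amplifies code noise (the coefficients exceed 1 in magnitude), which is one reason the uncombined UC123 model behaves differently after TGD/DCB correction.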
19. Multiplicative Noise Removal and Contrast Enhancement for SAR Images Based on a Total Fractional-Order Variation Model.
- Author
- Zhou, Yamei, Li, Yao, Guo, Zhichang, Wu, Boying, and Zhang, Dazhi
- Subjects
- BIAS correction (Topology), IMAGE intensifiers, DISCRETE Fourier transforms, NOISE
- Abstract
In this paper, we propose a total fractional-order variation model for multiplicative noise removal and contrast enhancement of real SAR images. Inspired by the high dynamic intensity range of SAR images, the full content of the SAR images is preserved by normalizing the original data in this model. Then, we propose a degradation model based on the nonlinear transformation to adjust the intensity of image pixel values. With MAP estimator, a corresponding fidelity term is introduced into the model, which is beneficial for contrast enhancement and bias correction in the denoising process. For the regularization term, a gray level indicator is used as a weighted matrix to make the model adaptive. We first apply the scalar auxiliary variable algorithm to solve the proposed model and prove the convergence of the algorithm. By virtue of the discrete Fourier transform (DFT), the model is solved by an iterative scheme in the frequency domain. Experimental results show that the proposed model can enhance the contrast of natural and SAR images while removing multiplicative noise. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. A Hybrid Feature Pyramid CNN-LSTM Model with Seasonal Inflection Month Correction for Medium- and Long-Term Power Load Forecasting.
- Author
-
Cheng, Zizhen, Wang, Li, and Yang, Yumeng
- Subjects
- *
LOAD forecasting (Electric power systems) , *PYRAMIDS , *INFLECTION (Grammar) , *FORECASTING , *SEASONS , *FEATURE extraction , *BIAS correction (Topology) - Abstract
Accurate medium- and long-term power load forecasting is of great significance for the scientific planning and safe operation of power systems. Monthly power load has multiscale time series correlation and seasonality. The existing models face the problems of insufficient feature extraction and a large volume of prediction models constructed according to seasons. Therefore, a hybrid feature pyramid CNN-LSTM model with seasonal inflection month correction for medium- and long-term power load forecasting is proposed. The model is constructed based on linear and nonlinear combination forecasting. To address the insufficient extraction of multiscale temporal correlation in the load, a time series feature pyramid structure based on causal dilated convolution is proposed, and the accuracy of the model is improved by feature extraction and fusion of different scales. To address the excessive volume of season-specific prediction models, a seasonal inflection monthly load correction strategy is proposed to construct a unified model to predict and correct the monthly load of the seasonal change inflection point, so as to improve the model's ability to deal with seasonality. The model proposed in this paper is verified on actual power data from Shaoxing City. [ABSTRACT FROM AUTHOR]
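The causal dilated convolution on which the proposed feature pyramid is built can be illustrated in plain NumPy (a generic sketch of the operation, not the paper's network):

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    """Causal dilated 1-D convolution: y[t] depends only on
    x[t], x[t-d], x[t-2d], ... (left-padded with zeros)."""
    k = len(w)
    y = np.zeros_like(x, dtype=float)
    for t in range(len(x)):
        for i in range(k):
            j = t - i * dilation
            if j >= 0:
                y[t] += w[i] * x[j]
    return y

x = np.arange(8, dtype=float)
w = np.array([1.0, 0.5])          # kernel: current step + one dilated tap
y = causal_dilated_conv1d(x, w, dilation=2)

# Causality check: perturbing a future input must not change past outputs.
x2 = x.copy(); x2[5] += 100.0
y2 = causal_dilated_conv1d(x2, w, dilation=2)
```

Stacking such layers with growing dilation (1, 2, 4, ...) yields exponentially larger receptive fields at different time scales, which is what a temporal feature pyramid exploits.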
- Published
- 2023
- Full Text
- View/download PDF
21. Tuning Random Forests for Causal Inference under Cluster-Level Unmeasured Confounding.
- Author
-
Suk, Youmi and Kang, Hyunseung
- Subjects
- *
CAUSAL inference , *RANDOM forest algorithms , *MACHINE learning , *BIAS correction (Topology) , *LOGISTIC regression analysis , *CONFOUNDING variables - Abstract
Recently, there has been growing interest in using machine learning methods for causal inference due to their automatic and flexible ability to model the propensity score and the outcome model. However, almost all the machine learning methods for causal inference have been studied under the assumption of no unmeasured confounding and there is little work on handling omitted/unmeasured variable bias. This paper focuses on a machine learning method based on random forests known as Causal Forests and presents five simple modifications for tuning Causal Forests so that they are robust to cluster-level unmeasured confounding. Our simulation study finds that adjusting the default tuning procedure with the propensity score from fixed effects logistic regression or using variables that are centered to their cluster means produces estimates that are more robust to cluster-level unmeasured confounding. Also, when these parametric propensity score models are mis-specified, our modified machine learning methods remain robust to bias from cluster-level unmeasured confounders compared to existing parametric approaches based on propensity score weighting. We conclude by demonstrating our proposals in a real data study concerning the effect of taking an eighth-grade algebra course on math achievement scores from the Early Childhood Longitudinal Study. [ABSTRACT FROM AUTHOR]
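One of the proposed modifications, centering covariates to their cluster means, amounts to the classic "within" transformation; a minimal sketch with made-up clusters:

```python
import numpy as np

def center_to_cluster_means(x, cluster_ids):
    """Subtract each observation's cluster mean, removing any
    cluster-level (constant-within-cluster) confounder from x."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    for c in np.unique(cluster_ids):
        mask = (cluster_ids == c)
        out[mask] = x[mask] - x[mask].mean()
    return out

clusters = np.array([0, 0, 0, 1, 1, 1])
x = np.array([1.0, 2.0, 3.0, 11.0, 12.0, 13.0])  # cluster 1 shifted by +10
xc = center_to_cluster_means(x, clusters)
```

The cluster-level shift (+10) disappears after centering, so any unmeasured confounder that is constant within a cluster can no longer drive the fitted propensity or outcome models.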
- Published
- 2023
- Full Text
- View/download PDF
22. Comparison of different estimation methods for extreme value distribution.
- Author
-
Yılmaz, Asuman, Kara, Mahmut, and Özdemir, Onur
- Subjects
- *
DISTRIBUTION (Probability theory) , *MARKOV chain Monte Carlo , *MAXIMUM likelihood statistics , *EXTREME value theory , *BIAS correction (Topology) , *LEAST squares , *GIBBS sampling - Abstract
The extreme value distribution was developed for modeling extreme-order statistics or extreme events. In this study, we discuss the distribution of the largest extreme. The main objective of this paper is to determine the best estimators of the unknown parameters of the extreme value distribution. Thus, both classical and Bayesian methods are used. The classical estimation methods under consideration are maximum likelihood estimators, moment estimators, least squares estimators, weighted least squares estimators, percentile estimators, ordinary least squares estimators, best linear unbiased estimators, L-moments estimators, trimmed L-moments estimators, and Bain and Engelhardt estimators. We also propose new estimators for the unknown parameters. Bayesian estimators of the parameters are derived by using Lindley's approximation and Markov Chain Monte Carlo methods. The asymptotic confidence intervals are considered by using maximum likelihood estimators. The Bayesian credible intervals are also obtained by using Gibbs sampling. The performances of these estimation methods are compared with respect to their biases and mean square errors through a simulation study. The maximum daily flood discharge (annual) data sets of the Meriç River and Feather River are analyzed at the end of the study for a better understanding of the methods presented in this paper. [ABSTRACT FROM AUTHOR]
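For the largest-extreme (Gumbel) distribution discussed here, the moment estimators are among the simplest of the classical methods compared: the Gumbel mean is μ + γσ and its variance is π²σ²/6, where γ is the Euler–Mascheroni constant. A sketch with synthetic data (not the river records):

```python
import math
import numpy as np

EULER_GAMMA = 0.5772156649015329

def gumbel_moment_estimates(sample):
    """Method-of-moments estimators for the Gumbel (max) distribution:
    mean = mu + gamma*sigma, variance = (pi*sigma)^2 / 6."""
    s = np.std(sample, ddof=1)
    sigma_hat = s * math.sqrt(6.0) / math.pi
    mu_hat = np.mean(sample) - EULER_GAMMA * sigma_hat
    return mu_hat, sigma_hat

# Synthetic sample via the inverse CDF: F(x) = exp(-exp(-(x-mu)/sigma)),
# so x = mu - sigma * log(-log(u)) for u ~ Uniform(0, 1).
rng = np.random.default_rng(0)
mu, sigma, n = 10.0, 2.0, 200_000
u = rng.uniform(size=n)
sample = mu - sigma * np.log(-np.log(u))
mu_hat, sigma_hat = gumbel_moment_estimates(sample)
```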
- Published
- 2021
- Full Text
- View/download PDF
23. Glimpses are forever in RC4 amidst the spectre of biases.
- Author
-
Chakraborty, Chandratop, Chakraborty, Pranab, and Maitra, Subhamoy
- Subjects
- *
STREAM ciphers , *SCHOOL children , *CRYPTOGRAPHY , *EVIDENCE , *BIAS correction (Topology) - Abstract
In this paper we exploit elementary combinatorial techniques to settle different cryptanalytic observations on RC4 that remained unproved for more than two decades. At the same time, we present new observations with theoretical proofs. We first prove the biases (non-randomness) presented by Fluhrer and McGrew (FSE 2000) two decades ago. It is surprising that though the biases were published long ago, and they have found many applications in cryptanalysis up to the present day, the proofs have never been presented. In this paper, we complete that task and also show that any such bias immediately provides a glimpse of hidden variables in RC4. Further, we take up the biases of two non-consecutive key-stream bytes skipping one byte in between. We show the incompleteness of such a result presented by SenGupta et al. (JoC, 2013) and provide new observations and proofs in this direction relating the key-stream bytes and glimpses. Similarly, we streamline certain missed observations in the famous Glimpse theorem presented by Jenkins in 1996. Our results point out how biases of the RC4 key-stream and glimpses of the RC4 hidden variables are related. It is evident from our results that biases and glimpses are everywhere in RC4 and that they need further investigation, as we provide glimpses of very high magnitude that were not known earlier. [ABSTRACT FROM AUTHOR]
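The RC4 key-stream generator whose biases are studied is short enough to sketch in full (standard KSA plus PRGA); the key "Key" below is the classic published test vector, not data from the paper:

```python
def rc4_keystream(key, n):
    """Generate n RC4 key-stream bytes (KSA + PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                    # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(n):                      # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

ks = rc4_keystream(b"Key", 9)
# XOR with b"Plaintext" reproduces the well-known ciphertext BBF316E8D940AF0AD3.
ct = bytes(k ^ p for k, p in zip(ks, b"Plaintext")).hex().upper()
```

The biases in question are statistical deviations of these output bytes (and byte pairs) from uniformity, which leak "glimpses" of the internal state S, i, j.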
- Published
- 2021
- Full Text
- View/download PDF
24. Improved point estimation for inverse gamma regression models.
- Author
-
Magalhães, Tiago M., Gallardo, Diego I., and Bourguignon, Marcelo
- Subjects
- *
FIX-point estimation , *REGRESSION analysis , *BIAS correction (Topology) , *MONTE Carlo method , *PARAMETER estimation - Abstract
This paper develops a bias correction scheme for reparametrized inverse gamma regression models with varying precision [Bourguignon M, Gallardo DI. Reparametrized inverse gamma regression models with varying precision. Stat Neerl. 2020;74(4):611–627], which is tailored to situations where the response variable has an asymmetrical shape on the positive real line. In particular, we discuss maximum-likelihood estimation for the model parameters and derive closed-form expressions for the first-order bias of the estimators. The expressions derived are simple and only require the definition of a few matrices. This enables us to obtain corrected estimators that are approximately unbiased. We conduct an extensive Monte Carlo simulation study to evaluate the performance of the proposed corrected estimators. Finally, we apply the results obtained in three real-world datasets. This paper contains Supplementary Material. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
25. A combination model for displacement prediction of high arch dams stacking five kinds of temperature factors.
- Author
-
Chai, Bingao and Wang, Shaowei
- Subjects
- *
ARCH dams , *STANDARD deviations , *SUPPORT vector machines , *PREDICTION models , *ATMOSPHERIC temperature , *HARMONIC functions , *BIAS correction (Topology) - Abstract
The statically indeterminate characteristics of arch dams highlight the temperature deformation effect, making accurate modelling of this effect a key issue in improving the performance of displacement monitoring models. In this paper, causal interpretation ability and prediction accuracy of five kinds of temperature deformation modelling factors, including seasonal harmonic function, segmented average previous air temperature, air temperature hysteresis correction factor, principal components and shape feature clustering-based principal components of measured dam temperatures, are compared. On this basis, a combination prediction model is established using the above five causal models as submodels. The combination process is conducted by three methods of dynamic mutual information coefficient, random forest and support vector machine. Research results of the Jinping-I arch dam show that the shape feature clustering-based temperature principal components can significantly improve the accuracy and adaptability of displacement monitoring models, in which the root mean square error decreases with an average rate of 52%. The combination prediction model can effectively take the advantages of different kinds of temperature deformation modelling factors into account. Compared with the hydraulic-seasonal-time model and the best submodel, prediction accuracy of the support vector machine-based combination model is improved with an average rate of 54% and 28%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Small sample bias correction or bias reduction?
- Author
-
Zhang, Xuemao, Paul, Sudhir, and Wang, You-Gan
- Subjects
- *
BIAS correction (Topology) , *GENERALIZED estimating equations , *ESTUARIES , *MEDICAL sciences , *DATA analysis - Abstract
Many problems in biomedical and other sciences are subject to biased estimates (maximum likelihood or of similar types). In two seminal papers Cox and Snell (1968) and Firth (1993) deal with first order bias of maximum likelihood estimates. Cox and Snell obtain a correction term that corrects, approximately, first order bias and Firth uses an adjustment to the score function; the solution of the estimating equation obtained by solving the adjusted score function to zero, removes the first order bias of the maximum likelihood estimates approximately. In many applications authors use one of these two procedures for bias correction without being aware that the other exists or whether these two procedures are equivalent. In this paper we investigate the equivalence issue of the two methods through theoretical analysis, simulation study and data analysis. We show that the two methods yield either exactly the same estimates or that the preventive method has some edge over the other. [ABSTRACT FROM AUTHOR]
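A concrete instance of the Cox and Snell (1968) correction, illustrative only and not taken from the paper: for an exponential sample the MLE λ̂ = 1/x̄ has first-order bias approximately λ/n, so subtracting the plug-in estimate λ̂/n gives the corrected estimator λ̂(1 − 1/n):

```python
import numpy as np

def exp_rate_mle(sample):
    """MLE of the exponential rate: lambda_hat = 1 / mean."""
    return 1.0 / np.mean(sample)

def exp_rate_cox_snell(sample):
    """Cox-Snell first-order bias correction: the MLE's bias is
    ~lambda/n, so subtract the plug-in estimate lambda_hat/n."""
    n = len(sample)
    return exp_rate_mle(sample) * (1.0 - 1.0 / n)

# Small-sample simulation: the uncorrected MLE overestimates the rate.
rng = np.random.default_rng(1)
lam_true, n, reps = 2.0, 10, 20_000
mle_mean = np.mean([exp_rate_mle(rng.exponential(1 / lam_true, n))
                    for _ in range(reps)])
corr_mean = np.mean([exp_rate_cox_snell(rng.exponential(1 / lam_true, n))
                     for _ in range(reps)])
```

Firth's preventive approach would instead modify the score equation before solving; in this simple model both routes remove the O(1/n) bias.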
- Published
- 2021
- Full Text
- View/download PDF
27. The polynomial-exponential distribution: a continuous probability model allowing for occurrence of zero values.
- Author
-
Chesneau, Christophe, Bakouch, Hassan S., Ramos, Pedro L., and Louzada, Francisco
- Subjects
- *
DISTRIBUTION (Probability theory) , *CONTINUOUS distributions , *MAXIMUM likelihood statistics , *WEIBULL distribution , *STATISTICAL reliability , *LOGNORMAL distribution , *BIAS correction (Topology) - Abstract
This paper deals with a new two-parameter lifetime distribution with increasing, decreasing and constant hazard rate. This distribution allows the occurrence of zero values and involves the exponential, linear exponential and other combinations of Weibull distributions as submodels. Many statistical properties of the distribution are derived. Maximum likelihood estimation of the parameters and a bias corrective approach is investigated with a simulation study for performance of the estimators. Four real data sets are analyzed for illustrative purposes and it is noted that the distribution is a highly alternative to the gamma, Weibull, Lognormal and exponentiated exponential distributions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
28. Geometric correction of satellite stereo images by DEM matching without ground control points and map projection step: tested on Cartosat-1 images.
- Author
-
Afsharnia, Hamed, Arefi, Hossein, and Abbasi, Madjid
- Subjects
- *
MAP projection , *REMOTE-sensing images , *STEREO image , *IMAGE registration , *BIAS correction (Topology) , *GEOMETRIC modeling - Abstract
Ground Control Points (GCPs) are needed in most geometric processing of satellite images. Generally used geometric model is based on the use of rational polynomials. The Coefficients of Rational Polynomials (RPCs) are provided by image vendors which are contaminated by some biases. In this paper, a no-GCP bias correction method for RPCs of satellite stereo images is introduced. The method uses global DEMs as ground control information: First, a point cloud is generated from stereo pair, then using the DEM matching strategy it is aligned to the global DEM for estimation of 3D rigid transformation parameters. This transformation is performed with our originally developed method which separates three planimetric parameters from three leveling parameters. They are then employed for bias correction in object space. For DEM matching and also for bias correction, we developed new formulae given directly in geodetic longitude and latitude format, instead of Cartesian map projection coordinates. Numerical results of this research are reported in two categories: with or without (1) GCPs and (2) map projection step. Experiments on two Cartosat-1 stereo pairs show in category (1), improvement in geopositioning accuracy from 399.2 m and 124.0 m to 7.6 m and 2.6 m, respectively. In category (2), we observed RMS improvement in both datasets and in all components up to 3.1 m in longitude, 6.8 m in latitude and 1.2 m in height. [ABSTRACT FROM AUTHOR]
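The heart of the DEM-matching step, estimating a 3D rigid transformation between point sets, can be sketched with the standard SVD-based (Kabsch) solution; this is a generic illustration and does not reproduce the authors' separation of planimetric and leveling parameters or their geodetic-coordinate formulae:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t with Q ~ R @ P + t
    (Kabsch algorithm: SVD of the cross-covariance matrix)."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                       # reflection-safe rotation
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(2)
P = rng.normal(size=(3, 50))                 # synthetic point cloud
th = 0.1                                     # small rotation about z
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[5.0], [-3.0], [1.0]])
Q = R_true @ P + t_true                      # "global DEM" side
R_est, t_est = rigid_align(P, Q)
```

With the transformation recovered, the bias in the RPC-derived point cloud can be corrected in object space by applying R and t.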
- Published
- 2022
- Full Text
- View/download PDF
29. Non-gravitational force measurement and correction by a precision inertial sensor of TianQin-1 satellite.
- Author
-
Zhou, An-Nan, Cai, Lin, Xiao, Chun-Yu, Tan, Ding-Yin, Li, Hong-Yin, Bai, Yan-Zheng, Zhou, Ze-Bing, and Luo, Jun
- Subjects
- *
ORBIT determination , *DETECTORS , *ORBITS of artificial satellites , *GRAVITATIONAL waves , *BIAS correction (Topology) - Abstract
Non-gravitational force models are critical not only for the applications of satellite orbit determination and prediction, but also for the studies of gravitational reference sensors in space-based gravitational wave detection missions and accelerometers in gravity satellite missions. In this paper, based on the inertial sensor data from the TianQin-1 (TQ-1) mission, a correction has been made in the non-gravitational force models by applying additional terms related to the orbital periods. After taking into account this correction, about 37 hours of TQ-1 inertial sensor data is calibrated in the sensitive axes, i.e., the y- and z-axes, by comparing with the simulated non-gravitational accelerations. It is indicated that the peak-to-peak values of the non-gravitational acceleration correction terms are about 2% and 13% of the measured accelerations in the y- and z-axes, respectively. Within the frequency band below 0.01 Hz, the root mean square of calibration residual errors in the y- and z-axes are suppressed from 1.03 × 10⁻⁹ and 3.872 × 10⁻⁹ m s⁻² to 8.14 × 10⁻¹⁰ and 1.343 × 10⁻⁹ m s⁻², respectively. The bias and scale factor of the inertial sensor are also obtained from the calibration by the method of least-squares fit. Meanwhile, the inertial sensor measurements are validated and their signal compositions are analyzed. [ABSTRACT FROM AUTHOR]
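The least-squares fit of bias and scale factor mentioned at the end is ordinary linear regression of measured against reference acceleration; the numbers below are synthetic stand-ins, not TQ-1 data:

```python
import numpy as np

# Model: a_meas = scale * a_ref + bias + noise.
# Fit [scale, bias] by least squares with design matrix [a_ref, 1].
rng = np.random.default_rng(3)
a_ref = rng.normal(scale=1e-7, size=500)           # reference accel (m/s^2)
scale_true, bias_true = 0.98, 2.0e-9
a_meas = scale_true * a_ref + bias_true + rng.normal(scale=1e-11, size=500)

A = np.column_stack([a_ref, np.ones_like(a_ref)])
(scale_est, bias_est), *_ = np.linalg.lstsq(A, a_meas, rcond=None)
```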
- Published
- 2022
- Full Text
- View/download PDF
30. Determining the Relative Weight Ratio of Joint Inversion Using a Bias-Corrected Variance Component Estimation Method.
- Author
-
王乐洋, 谷旺旺, 赵 雄, 许光煜, and 高 华
- Subjects
- *
LEAST squares , *EARTHQUAKES , *ACCOUNTING methods , *BIAS correction (Topology) - Abstract
Objectives: When using variance component estimation to determine the relative weight ratio, the least squares solution is generally used as the initial value of the iteration. In geodetic joint inversion, the least squares method will cause ill-posed problems, so the regularized solution is used instead of the least squares solution. Regularization introduces bias to reduce variance, but when using variance component estimation to determine the relative weight ratio, the influence of bias is not considered, and the introduction of bias will cause inaccurate variance component estimation. Methods: This paper adopts the bias-corrected variance component estimation method to eliminate the influence of bias introduced by regularization. The residual-based bias-corrected variance component estimation and the variance component estimation method are used for simulation experiments, and the Visso and Norcia earthquakes are used for verification. Results and Conclusions: The simulation experimental results show that the variance component estimation method after bias correction can better invert the slip distribution. The bias-corrected variance component estimation method takes into account the bias introduced by the iterative initial value, and the theory is more rigorous. The two real-earthquake results show that the bias correction is reasonable and advantageous. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
31. Future changes in the risk of compound hot and dry events over China estimated with two large ensembles.
- Author
-
Tang, Zhenfei, Yang, Ting, Lin, Xin, Li, Xinxin, Cao, Rong, and Li, Wei
- Subjects
- *
COPULA functions , *GLOBAL warming , *BIAS correction (Topology) - Abstract
Under the context of global warming, compound dry and hot events (CDHEs) will increase and bring serious losses to society and the economy. The projection of CDHEs is of great significance for policy-making and risk assessment. In this paper, two large ensemble simulations, CanESM2-LE and CESM-LE, are used to estimate the risk of extreme CDHEs under different warming scenarios in China. First, the biases of the model in the simulation of the temperature and precipitation over the China region are corrected, and the index of CDHEs is established based on a copula function. The results show that extreme CDHEs will occur more often in China with the increase in global warming, and the more severe extreme CDHEs are, the greater the risk will be in the future, with higher uncertainties. Events that would be attained once every 50 and 100 years in the current climate from CESM-LE (CanESM2-LE) will be 1.2/1.6 (1.1/1.5) times and 1.3/2.3 (1.5/2.0) times more likely to occur in a 1.5°C/2.0°C warmer climate, respectively. Northwestern China will experience the greatest increase in the risk of extreme CDHEs. Extreme CDHEs expected once every 100 years in the current period over NW China are expected to occur approximately every 5 and 4 years under a 4.0°C warmer world in CanESM2-LE and CESM-LE, respectively. [ABSTRACT FROM AUTHOR]
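The return-period statements (an event "once every 50 years" becoming some factor more likely) rest on joint exceedance probabilities; a generic empirical sketch with synthetic data, not the paper's copula fit:

```python
import numpy as np

def joint_exceed_prob(temp, precip, t_thr, p_thr):
    """Empirical probability of a compound hot-and-dry year:
    temperature above t_thr AND precipitation below p_thr."""
    hit = (temp > t_thr) & (precip < p_thr)
    return hit.mean()

rng = np.random.default_rng(4)
n = 200_000                                   # synthetic "years"
temp = rng.normal(size=n)
precip = rng.normal(size=n)                   # independent, for checking

# ~90th percentile of temperature and ~10th percentile of precipitation.
p = joint_exceed_prob(temp, precip, 1.2816, -1.2816)
return_period = 1.0 / p                       # expected years per event
```

With independent margins the joint probability is the product of the two tail probabilities (0.1 × 0.1 = 0.01, a 100-year event); a copula replaces this product to capture the dependence between heat and drought, which usually shortens the compound return period.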
- Published
- 2022
- Full Text
- View/download PDF
32. Evaluation of Image Features Within and Surrounding Lesion Region for Risk Stratification in Breast Ultrasound Images.
- Author
-
Panigrahi, Lipismita, Verma, Kesari, and Singh, Bikesh Kumar
- Subjects
- *
BIAS correction (Topology) , *BREAST ultrasound , *ULTRASONIC imaging , *COMPUTER-aided diagnosis , *SPECKLE interference , *BREAST imaging , *IMAGE segmentation , *IMAGE representation - Abstract
Feature extraction and classification play a crucial role in the automated analysis of breast ultrasound (BUS) images. Due to the varying sonographic characteristics of benign and malignant lesions, texture and shape features are mostly employed for designing computer-aided diagnosis (CAD) systems for BUS images. The existing CAD systems use features that are extracted either from the lesion segmented area obtained through segmentation techniques or from a rectangular region of interest (ROI) extracted under the guidance of expert Radiologists. However, the significance of features extracted from the region comprising only the lesion area is still little explored. This paper investigates the significance of features extracted from the lesion area, lesion surrounding area and rectangular ROI for classification of BUS images. The experiments were conducted on the database of 294 BUS images (104 benign and 190 malignant). Initially, the acquired BUS images were preprocessed through speckle reducing anisotropic diffusion (SRAD) for speckle noise removal. The preprocessed images are segmented using a hybrid segmentation approach including a combination of region-based active contour driven by region-scalable fitting (RBACM-RSF) model and multi-scale Gaussian kernel fuzzy c-means clustering with spatial bias correction (MsGKFCM_S) for obtaining the ROI-confined area. The segmented images were further partitioned into two parts (lesion area and lesion surrounding area). Subsequently, a total of 457 texture and shape attributes were extracted from within the lesion area, lesion surrounding area and rectangular ROI comprising both the lesion and its surrounding area. The significance of these features is evaluated using different classifiers (i.e. support vector machine (SVM), Back-propagation artificial neural network (BPANN), Random Forest, AdaBoost). 
The results indicate that features extracted from within the lesion area achieve a maximum classification accuracy of 98.980%, with the lowest computational time, when a linear kernel-based SVM is used. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
33. Koopman neural operator as a mesh-free solver of non-linear partial differential equations.
- Author
-
Xiong, Wei, Huang, Xiaomeng, Zhang, Ziyang, Deng, Ruixuan, Sun, Pei, and Tian, Yang
- Subjects
- *
PARTIAL differential equations , *NONLINEAR differential equations , *NONLINEAR equations , *PARTIAL differential operators , *RAYLEIGH-Benard convection , *BIAS correction (Topology) , *TRANSPORT equation - Abstract
The lack of analytic solutions for diverse partial differential equations (PDEs) has given rise to a series of computational techniques for numerical solutions. Although numerous latest advances are accomplished in developing neural operators, a kind of neural-network-based PDE solver, these solvers become less accurate and explainable while learning long-term behaviors of non-linear PDE families. In this paper, we propose the Koopman neural operator (KNO), a new neural operator, to overcome these challenges. With the same objective of learning an infinite-dimensional mapping between Banach spaces that serves as the solution operator of the target PDE family, our approach differs from existing models by formulating a non-linear dynamic system of equation solution. By approximating the Koopman operator, an infinite-dimensional operator governing all possible observations of the dynamic system, to act on the flow mapping of the dynamic system, we can equivalently learn the solution of a non-linear PDE family by solving simple linear prediction problems. We validate the KNO in mesh-independent, long-term, and zero-shot predictions on five representative PDEs (e.g., the Navier-Stokes equation and the Rayleigh-Bénard convection) and three real dynamic systems (e.g., global water vapor patterns and western boundary currents). In these experiments, the KNO exhibits notable advantages compared with previous state-of-the-art models, suggesting the potential of the KNO in supporting diverse science and engineering applications (e.g., PDE solving, turbulence modeling, and precipitation forecasting). • We propose a new neural operator model for partial differential equation (PDE) solving and realize an effective linear prediction of non-linear dynamics via Koopman operator. • Our approach achieves robust and accurate modeling of the long-term non-linear dynamics of PDE families or real systems. 
• Our model may serve as a basic unit for constructing new frameworks of PDE solving and dynamic system modeling. [ABSTRACT FROM AUTHOR]
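The core idea of "learning a non-linear PDE family by solving simple linear prediction problems" can be illustrated with the simplest finite Koopman approximation, plain dynamic mode decomposition on snapshot pairs (a toy sketch, not the KNO architecture):

```python
import numpy as np

# Snapshots of a linear system x_{k+1} = A x_k; the best linear one-step
# predictor K = X2 @ pinv(X1) recovers A exactly.  For non-linear systems
# the same regression is done on a lifted set of observables, which is
# the finite-dimensional Koopman approximation.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
x = np.array([1.0, 1.0])
snaps = [x]
for _ in range(20):
    x = A @ x
    snaps.append(x)

X = np.array(snaps).T            # state snapshots, shape (2, 21)
X1, X2 = X[:, :-1], X[:, 1:]     # paired snapshots (before, after)
K = X2 @ np.linalg.pinv(X1)      # least-squares Koopman/DMD operator
x_pred = K @ X[:, -1]            # one-step-ahead prediction
```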
- Published
- 2024
- Full Text
- View/download PDF
34. Accuracy analysis and applications of the Sterling interpolation method for nonlinear function error propagation.
- Author
-
Wang, Leyang and Zou, Chuanyi
- Subjects
- *
NONLINEAR functions , *ERROR functions , *INTERPOLATION , *POUND sterling , *BIAS correction (Topology) , *COVARIANCE matrices - Abstract
• Sterling interpolation can make the mean and covariance matrix achieve second-order precision. • The optimal value of h is √3 when calculating the covariance matrix by Sterling interpolation. • First to use Sterling interpolation for nonlinear function error propagation. • Introduces Sterling interpolation into precision estimation in geodesy and the calculation of bias. The differential is replaced by the Sterling interpolation method with a finite difference, without calculating the first and second derivatives of the nonlinear function. The mean and variance or covariance matrix of the nonlinear function can then be obtained with similar precision to that of the second-order Taylor expansion. Existing studies neither give an accurate proof that the means and covariance matrices of multiple nonlinear functions solved by the Sterling interpolation method can achieve second-order precision nor give a theoretical proof of the choice of the step factor h. In this paper, it is shown by formula deduction that the mean of any nonlinear function calculated by the Sterling interpolation method achieves second-order precision. The optimal value of the step factor h is √3, which is derived by formula deduction when using the Sterling interpolation method to calculate the variance or covariance matrix of nonlinear functions. As the mean of the dependent variable of a nonlinear function can be calculated by Sterling interpolation through error propagation, the Sterling interpolation method is also used to calculate the bias of a random quantity. The Sterling interpolation method is applied to a forward intersection, the forward computation of Gaussian projection coordinates and the bias of displacements of the rectangular dislocation model. 
The case studies in this paper indicate the correctness of the theoretical proof and selection of the step factor, and the applicability and advantages of the Sterling interpolation method for error propagation and calculation of bias are verified. [ABSTRACT FROM AUTHOR]
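For a scalar function, Sterling-interpolation propagation uses only g(μ) and g(μ ± hσ); with Gaussian inputs the step h = √3 matches the fourth moment, so a quadratic g is even propagated exactly. A 1-D sketch (the paper works with full covariance matrices):

```python
import math

def sterling_propagate(g, mu, sigma, h=math.sqrt(3.0)):
    """Second-order mean/variance of g(X), X ~ (mu, sigma^2), via
    Sterling interpolation (central differences with step h*sigma)."""
    gp, g0, gm = g(mu + h * sigma), g(mu), g(mu - h * sigma)
    mean = g0 + (gp + gm - 2.0 * g0) / (2.0 * h**2)
    var = ((gp - gm) ** 2 / (4.0 * h**2)
           + (h**2 - 1.0) * (gp + gm - 2.0 * g0) ** 2 / (4.0 * h**4))
    return mean, var

# For a quadratic g the second-order formulas are exact:
# E[X^2] = mu^2 + sigma^2 = 1.25, Var[X^2] = 4 mu^2 sigma^2 + 2 sigma^4 = 1.125.
m, v = sterling_propagate(lambda x: x * x, mu=1.0, sigma=0.5)
```

No derivatives of g are needed, which is the method's practical advantage over second-order Taylor propagation.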
- Published
- 2019
- Full Text
- View/download PDF
35. Maximum Likelihood Estimation for Parameters of Extended Burr XII Distribution and Bias Correction Based on K-Records.
- Author
-
Salmasi, M. Rajaei
- Subjects
- *
MAXIMUM likelihood statistics , *PARAMETER estimation , *BIAS correction (Topology) - Abstract
Our aim in this paper is to estimate the parameters of the extended Burr XII (EBXII) distribution based on k-records, with bias correction. To this end, the maximum likelihood estimator (MLE) is applied, and a bias correction for the ML estimates is considered to obtain an improved estimator. Numerical results comparing the performance of the estimators are also presented. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
36. RETRACTED ARTICLE: Combination Weighting Method of Engineering Disciplines Evaluation Index Based on Soft Computing.
- Author
-
Shuang, Yongqiang and Ding, Yunlong
- Subjects
- *
METHODS engineering , *SOFT computing , *ANALYTIC hierarchy process , *FUZZY numbers , *BIAS correction (Topology) , *BLOCKCHAINS , *WEIGHING instruments - Abstract
The development of engineering disciplines is aided by research into constructing a characteristic index system, particularly when blockchain technology is used. The key issue is how to choose multi-level evaluation indicators reasonably while also scientifically defining the weights of indicators at all levels. Subjective weighting methods suffer from subjective influence, whereas objective weighting methods necessitate a large sample size, a practical problem domain, and complex calculation methods. In response to the need for characteristic index system construction and evaluation research, this paper identifies four first-level indicators, eleven second-level indicators, and twenty-one third-level indicators as the main evaluation dimensions, including academic achievements, discipline strength, talent training, and international development. The proposed method combines the Fuzzy Analytic Hierarchy Process (AHP), based on triangular fuzzy numbers, with the CRITIC weighting analysis method. A multi-level evaluation index system is established to propose targeted combination weighting methods for the engineering disciplines. To avoid evaluation bias caused by the single use of subjective or objective weighting methods, the difference coefficient method is used for combined weighting based on subjective and objective information to calculate the weighting results. The experimental and modelling data show that the calculation and evaluation results of the algorithm proposed in this paper are promising and applicable in multiple domains where there is a hierarchy in the problem domain and multiple parameters participate in decision making. With the evaluation results of the proposed approach, the subjective weight obtained is 0.047, and the objective weight obtained is 0.253. [ABSTRACT FROM AUTHOR]
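The CRITIC objective weighting entering the combination can be sketched as follows: each indicator's weight is proportional to its standard deviation times its total conflict (one minus correlation) with the other indicators. The decision matrix and the 50/50 subjective/objective blend below are illustrative assumptions, not the paper's data or its derived difference coefficient:

```python
import numpy as np

def critic_weights(X):
    """CRITIC objective weights for a decision matrix X
    (rows = alternatives, columns = indicators, assumed pre-normalized):
    C_j = std_j * sum_k (1 - r_jk);  w_j = C_j / sum(C)."""
    std = X.std(axis=0, ddof=1)
    r = np.corrcoef(X, rowvar=False)
    conflict = (1.0 - r).sum(axis=1)
    c = std * conflict
    return c / c.sum()

rng = np.random.default_rng(5)
X = rng.uniform(size=(30, 4))     # 30 alternatives, 4 indicators
w_obj = critic_weights(X)

# Combination with a subjective (e.g. fuzzy-AHP-derived) weight vector;
# a 50/50 blend is used here purely for illustration.
w_subj = np.array([0.4, 0.3, 0.2, 0.1])
alpha = 0.5
w = alpha * w_subj + (1 - alpha) * w_obj
```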
- Published
- 2023
- Full Text
- View/download PDF
37. Nature-based solutions for climate change mitigation: Assessing the Scottish Public's preferences for saltmarsh carbon storage.
- Author
-
Riegel, Simone, Kuhfuss, Laure, and Stojanovic, Timothy
- Subjects
- *
CLIMATE change mitigation , *INDIVIDUALS' preferences , *ECOSYSTEM services , *GOVERNMENT policy on climate change , *WILLINGNESS to pay , *CLIMATE change , *BIAS correction (Topology) - Abstract
The saltmarsh carbon storage potential is a key topic in blue carbon research and climate policy. Ecosystem service valuations provide valuable information to policymakers for habitat management and climate change mitigation policies. Yet, only a few saltmarsh valuation studies have included the carbon storage service in the UK context. This paper investigates how the public values saltmarsh ecosystem services, focussing on the carbon storage service. We used a choice experiment to elicit the willingness to pay (WTP) of a representative sample of the Scottish public to support interventions that would maintain or improve the provision of these services. Furthermore, we tested the effect of information on individuals' preferences and WTP with a split sample approach where one group received a treatment in the form of additional information. We found that (i) all attributes had a significant influence on individuals' choices; (ii) both groups had, on average, a positive marginal WTP for all presented ecosystem services; (iii) the treated sample had, on average, no significantly different marginal WTP for carbon storage than the control group. This paper adds to the limited literature on the saltmarsh carbon storage ecosystem service and demonstrates the openness of a developed nation's public to nature-based climate change mitigation solutions. • The Scottish public on average prefers saltmarsh management over the status quo. • Providing additional information reduced heterogeneity in WTP for carbon storage. • Most respondents preferred an improvement of all saltmarsh ecosystem services. • Therefore management should focus on both climate change mitigation and adaptation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
38. An Innovative Bias-Correction Approach to CMA-GD Hourly Quantitative Precipitation Forecasts.
- Author
-
LIU Jin-qing, DAI Guang-feng, and OU Xiao-feng
- Subjects
- *
PRECIPITATION forecasting , *MARINE meteorology , *RAINSTORMS , *FORECASTING , *BIAS correction (Topology) - Abstract
This paper proposes a simple and powerful optimal integration (OPI) method for improving hourly quantitative precipitation forecasts (QPFs, 0-24 h) of a single model by integrating the benefits of different bias-correction methods using the high-resolution CMA-GD model from the Guangzhou Institute of Tropical and Marine Meteorology of the China Meteorological Administration (CMA). Three techniques are used to generate multi-method calibrated members for OPI: deep neural network (DNN), frequency-matching (FM), and optimal threat score (OTS). The results are as follows: (1) The QPF using DNN follows the basic physical patterns of CMA-GD. Despite providing superior improvements for clear-rainy and weak precipitation, DNN cannot improve the predictions for severe precipitation, while OTS can significantly strengthen these predictions. As a result, DNN and OTS are the optimal members to be incorporated into OPI. (2) Our new approach achieves state-of-the-art performance on a single model for all magnitudes of precipitation. Compared with CMA-GD, OPI improves the TS by 2.5%, 5.4%, 7.8%, 8.3%, and 6.1% for QPFs from clear-rainy to rainstorms in the verification dataset. Moreover, OPI shows good stability in the test dataset. (3) It is also noted that the rainstorm pattern of OPI relies heavily on the original model and that OPI cannot correct for deviations in the location of severe precipitation. Therefore, improvements in predicting severe precipitation using this method should be further realized by improving the numerical model's forecasting capability. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
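The frequency-matching (FM) member described in this abstract is essentially a quantile-mapping step: each forecast value is replaced by the observed value at the same empirical cumulative frequency. A minimal pure-Python sketch under that interpretation (the function names and the linear quantile lookup are illustrative assumptions, not the CMA-GD implementation):

```python
import bisect

def frequency_match(forecasts, obs_climatology, fcst_climatology):
    """Quantile-mapping sketch of frequency matching: replace each forecast
    value with the observed value at the same empirical cumulative frequency,
    so forecast and observed rainfall frequencies agree by construction."""
    fc_sorted = sorted(fcst_climatology)
    ob_sorted = sorted(obs_climatology)
    n = len(fc_sorted)
    corrected = []
    for f in forecasts:
        # empirical non-exceedance frequency of f in the forecast climatology
        rank = bisect.bisect_left(fc_sorted, f)
        q = min(rank / (n - 1), 1.0) if n > 1 else 0.5
        # read the observed climatology at the same quantile (linear lookup)
        idx = q * (len(ob_sorted) - 1)
        lo = int(idx)
        hi = min(lo + 1, len(ob_sorted) - 1)
        w = idx - lo
        corrected.append((1 - w) * ob_sorted[lo] + w * ob_sorted[hi])
    return corrected
```

For example, if the forecast climatology is systematically twice the observed one, a forecast of 4 maps back to 2, removing the wet bias while preserving the forecast's rank.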
39. Estimating high-spatial resolution surface daily longwave radiation from the instantaneous Global LAnd Surface Satellite (GLASS) longwave radiation product.
- Author
-
Zeng, Qi and Cheng, Jie
- Subjects
- *
STANDARD deviations , *RADIATION , *BIAS correction (Topology) - Abstract
In this paper, time-extension methods, originally designed for clear-sky land surface conditions, are used to estimate high-spatial-resolution surface daily longwave (LW) radiation from the instantaneous Global LAnd Surface Satellite (GLASS) longwave radiation product. The performance of four time-extension methods was first tested using ground-based flux measurements collected from 141 global sites. Judged by the accuracy of the daily LW radiation estimated from the instantaneous GLASS LW radiation, the linear sine interpolation method performs better than the other methods and was employed to estimate the daily LW radiation as follows: the bias/Root Mean Square Error (RMSE) of the linear sine interpolation method were −6.30/15.10 W/m2 for the daily longwave upward radiation (LWUP), −1.65/27.63 W/m2 for the daily longwave downward radiation (LWDN), and 4.69/26.42 W/m2 for the daily net longwave radiation (LWNR). We found that the diurnal cycle of LW radiation lasts longer than the interval between sunrise and sunset, and we therefore proposed increasing the day length by 1.5 h. The accuracies of the daily LW radiation were improved after adjusting the day length: the bias/RMSE were −4.15/13.74 W/m2 for the daily LWUP, −1.3/27.52 W/m2 for the daily LWDN, and 2.85/25.91 W/m2 for the daily LWNR. We are producing long-term surface daily LW radiation values from the GLASS LW radiation product. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
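The time-extension idea in this abstract, reconstructing a daily mean from a single satellite overpass by assuming a sinusoidal diurnal cycle over an extended day length, can be sketched as follows. The half-sine form, the constant nighttime level, and all argument names are illustrative assumptions, not the GLASS algorithm itself:

```python
import math

def daily_from_instantaneous(r_obs, t_obs, sunrise, day_length_h, r_night):
    """Estimate daily-mean radiation (W/m^2) from one instantaneous
    observation by fitting a half-sine diurnal cycle over an extended
    day length, in the spirit of sinusoidal time-extension schemes."""
    # extend the day length by 1.5 h, since the diurnal cycle of LW
    # radiation outlasts the sunrise-to-sunset window (the paper's finding)
    d = day_length_h + 1.5
    phase = math.pi * (t_obs - sunrise) / d
    amp = (r_obs - r_night) / math.sin(phase)  # peak amplitude of the cycle
    # mean of amp*sin over the daylight window, spread over 24 h:
    # (amp * 2d / pi) / 24 added to the nighttime baseline
    return r_night + amp * (2.0 * d) / (math.pi * 24.0)
```

With an overpass at the cycle's midpoint the observation equals the peak, and the daily mean falls between the nighttime baseline and the observed value, as expected.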
40. Machine Learning for Precipitation Forecasts Postprocessing: Multimodel Comparison and Experimental Investigation.
- Author
-
Zhang, Yuhang and Ye, Aizhong
- Subjects
- *
PRECIPITATION forecasting , *MACHINE learning , *HYDROLOGICAL forecasting , *STATISTICAL models , *WATERSHEDS , *BIAS correction (Topology) - Abstract
Obtaining high-quality quantitative precipitation forecasts is a key precondition for hydrological forecast systems. Due to multisource uncertainties (e.g., initial conditions, model structures, and parameters), raw forecasts are subject to systematic biases; hence, statistical postprocessing is often required to reduce these errors before the forecasts can proceed to hydrological applications. Machine learning (ML) algorithms are canonical statistical models, and they are diverse in type and variation. It is important to verify and compare their performance in the same scenario (e.g., precipitation postprocessing). In this paper, we conduct a large-scale comparison study for the major ML models with diverse model structures and regularization strategies as postprocessors for improving the quality of precipitation forecasts. Specifically, we compare the efficiency and effectiveness of 21 ML algorithms on solving this task. Daily reforecast precipitation with lead times up to 8 days from the Global Ensemble Forecast System and corresponding observations are employed to determine the usability of different models in the Yalong River basin in China. The performance of each model is validated by a group of carefully designed experiments and statistical metrics. The results reveal that improvements in model structures are more effective than regularization strategies. Among these algorithms, the optimized extra-trees regressor exhibits the best performance, effectively reducing overestimation and achieving the best skill in forecasting precipitation. Eleven ensemble members and a 3-day forward-rolling time window can be used as predictors to obtain the best model performance. The systematic experiments and findings also offer useful guidelines for other related studies. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
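The verification of postprocessed precipitation forecasts described above typically rests on categorical skill scores computed against a threshold. A common one is the threat score (critical success index); a plain-Python sketch with illustrative names:

```python
def threat_score(forecasts, observations, threshold):
    """Threat score (critical success index) at a precipitation threshold:
    hits / (hits + misses + false alarms). An event occurs when the value
    meets or exceeds the threshold."""
    hits = misses = false_alarms = 0
    for f, o in zip(forecasts, observations):
        fe, oe = f >= threshold, o >= threshold
        if fe and oe:
            hits += 1            # event forecast and observed
        elif oe:
            misses += 1          # event observed but not forecast
        elif fe:
            false_alarms += 1    # event forecast but not observed
    denom = hits + misses + false_alarms
    return hits / denom if denom else float("nan")
```

A perfect categorical forecast scores 1.0; forecasts with misses or false alarms score proportionally lower.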
41. Real-time GPS receiver bias estimation.
- Author
-
Kenpankho, Prasert, Chaichana, Amornchai, Trachu, Koson, Supnithi, Pornchai, and Hozumi, Kornyanat
- Subjects
- *
GPS receivers , *ESTIMATION bias , *BIAS correction (Topology) , *INTEGRAL functions , *STANDARD deviations , *INTERPOLATION - Abstract
In this paper, we present a new method for real-time GPS receiver bias estimation using Lagrange interpolation, which is compared with the two current methods: polynomial fitting and minimization of the standard deviation. The proposed method reduces the complexity and computation time of GPS receiver bias estimation. Lagrange interpolation finds the derivatives and integrals of the discrete functions underlying the GPS receiver bias data. The test site is the Chumphon station, Thailand, and the test period covers the years 2004–2019. On quiet and disturbed days, the polynomial method gives the highest GPS receiver bias values, −5.75 ns and −4.25 ns respectively, while Lagrange interpolation gives the lowest, −6.85 ns and −5.25 ns. Comparisons among the polynomial method, the standard deviation minimization method, and the Lagrange interpolation method show that the computation time for Lagrange interpolation is shorter than for the other methods, and it can provide more time points for finding GPS receiver biases. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
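Lagrange interpolation as used here, passing a polynomial exactly through discrete bias samples so values can be read off at arbitrary epochs, is the textbook construction. A minimal sketch (the direct basis-polynomial evaluation below is a generic implementation, not the authors' code):

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the points
    (xs[i], ys[i]) at x, e.g. to interpolate discrete receiver-bias samples
    between measurement epochs."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        # basis polynomial L_i(x): equals 1 at xs[i], 0 at every other node
        term = ys[i]
        for j in range(n):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total
```

Because the interpolant is an explicit polynomial, its derivative and integral follow in closed form, which is what makes the approach fast for dense time grids.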
42. Classification of breast cancer histopathological image with deep residual learning.
- Author
-
Hu, Chuhan, Sun, Xiaoyan, Yuan, Zhenming, and Wu, Yingfei
- Subjects
- *
DEEP learning , *BREAST cancer , *HISTOPATHOLOGY , *DATA augmentation , *TUMOR classification , *CONVOLUTIONAL neural networks , *BIAS correction (Topology) , *AFFINE transformations - Abstract
Breast cancer has high incidence and mortality rates in women worldwide. Malignancy can be detected manually by experienced pathologists from Hematoxylin and Eosin (H&E) stained images, but this is time-consuming and experience-dependent, making early diagnosis a big challenge. In this paper, a methodology for breast cancer classification based on histopathological images with deep learning is described. A residual-learning-based convolutional neural network named myResNet-34 was designed for malignant-versus-benign classification. In addition, an algorithm that automatically generates the target image for stain normalization was proposed, which eliminated the bias caused by manual selection of the reference image. Elastic distortion was introduced and combined with affine transformation for data augmentation, considering the characteristics of the H&E images. Experiments were conducted on the BreakHis dataset with the proposed framework. Promising results were achieved, with an average image-level classification accuracy of around 91%. Results indicated that both our data augmentation and stain normalization effectively improved the classification accuracy by 2-3%. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
43. Correction to: On statistical convergence and strong Cesàro convergence by moduli.
- Author
-
León-Saavedra, Fernando, Listán-García, M. del Carmen, Pérez Fernández, Francisco Javier, and Romero de la Rosa, María del Pilar
- Subjects
- *
LOGIC , *BIAS correction (Topology) , *TECHNOLOGY convergence - Abstract
We correct a logic mistake in our paper "On statistical convergence and strong Cesàro convergence by moduli" (León-Saavedra et al. in J. Inequal. Appl. 23:298, 2019). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
44. Improved empirical likelihood inference and variable selection for generalized linear models with longitudinal nonignorable dropouts.
- Author
-
Wang, Lei and Ma, Wei
- Subjects
- *
CONFIDENCE regions (Mathematics) , *GENERALIZED estimating equations , *GENERALIZED method of moments , *BIAS correction (Topology) , *CONFIDENCE intervals , *INFERENTIAL statistics - Abstract
In this paper, we propose improved statistical inference and variable selection methods for generalized linear models based on empirical likelihood approach that accommodates both the within-subject correlations and nonignorable dropouts. We first apply the generalized method of moments to estimate the parameters in the nonignorable dropout propensity based on an instrument. The inverse probability weighting is applied to obtain the bias-corrected generalized estimating equations (GEEs), and then we borrow the idea of quadratic inference function and hybrid GEE to construct the empirical likelihood procedures for longitudinal data with nonignorable dropouts, respectively. Two different classes of estimators and their confidence regions are derived. Further, the penalized EL method and algorithm for variable selection are investigated. The finite-sample performance of the proposed estimators is studied through simulation, and an application to HIV-CD4 data set is also presented. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
45. Modeling and optimization of ionospheric model coefficients based on adjusted spherical harmonics function.
- Author
-
Dabbakuti, J.R.K.Kumar
- Subjects
- *
SPHERICAL functions , *SPHERICAL harmonics , *GLOBAL Positioning System , *HARMONIC functions , *ARTIFICIAL satellites in navigation , *MAGNETIC storms , *BIAS correction (Topology) - Abstract
The range error caused by ionospheric delay in Global Positioning System (GPS) signals is currently the main factor affecting positioning accuracy and navigation determination. Modeling the ionosphere and optimizing its range error is a practical approach to improving GPS positioning accuracy. Global ionospheric models are mostly unable to predict ionospheric delay corrections over the India region owing to the Equatorial Ionization Anomaly (EIA)/fountain effect and the lack of equatorial/low-latitude GPS station data. This paper proposes a method that facilitates regional ionospheric delay corrections based on the Adjusted Spherical Harmonic Function (ASHF) model by considering fewer coefficients in the algorithm. The ionospheric correction is driven with low-resolution harmonic coefficients of order ≤2 (≤9 coefficients), and the coefficients are estimated with the Modified Gram-Schmidt (MGS) approach. The performance of the proposed ASHF model is compared with the Spherical Harmonic Function and Klobuchar models. Preliminary results reveal that the low-resolution ASHF model improved the ionospheric time delay corrections by 12.98% relative to the Klobuchar model. The results could be useful in the context of ionospheric time delay modeling for regional navigation satellite systems. • Estimation of ionospheric delays is proposed based on Orthogonal Dimensionality Reduction. • Model outcomes are assessed by comparing with the Weighted Least Square estimations. • The SVD-SHF method improves the accuracy of the model results in geomagnetic storm conditions. • The SVD-SHF method resulted in dimensional reduction and low computational load. • The outcomes suggest that the model implementation is used for GNSS and surveying applications. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
46. Operational bias correction for PM2.5 using the AIRPACT air quality forecast system in the Pacific Northwest.
- Author
-
June, Nicole, Vaughan, Joseph, Lee, Yunha, and Lamb, Brian K.
- Subjects
- *
AIR quality , *AIR quality management , *FORECASTING , *BIAS correction (Topology) , *KALMAN filtering , *PARTICULATE matter - Abstract
A bias correction scheme based on a Kalman filter (KF) method has been developed and implemented for the AIRPACT air quality forecast system, which operates daily for the Pacific Northwest. The KF method was used to correct hourly rolling 24-h average PM2.5 concentrations forecast at each monitoring site within the AIRPACT domain, and the corrected forecasts were evaluated using observed daily PM2.5 24-h average concentrations from 2017 to 2018. The evaluation showed that the KF method reduced mean daily bias from approximately −50% to ±6% on a monthly averaged basis, and the corrected results also exhibited much smaller mean absolute errors, typically less than 20%. These improvements were also apparent for the top 10 worst PM2.5 days during the 2017–2018 test period, including months with intensive wildfire events. Significant differences in AIRPACT performance among urban, suburban, and rural monitoring sites were greatly reduced in the KF bias-corrected forecasts. The daily 24-h average bias corrections for each monitoring site were interpolated to model grid points using three different interpolation schemes: cubic spline, Gaussian Kriging, and linear Kriging. The interpolated results were more accurate than the original AIRPACT forecasts, and both Kriging methods were better than the cubic spline method. The Gaussian method yielded smaller mean biases and the linear method yielded smaller absolute errors. The KF bias correction method has been implemented operationally using both Kriging interpolation methods for routine output on the AIRPACT website (). This method is relatively easy to implement, but very effective at improving air quality forecast performance. Implications: Current chemical transport models, including CMAQ, used for air quality forecasting can have large errors and uncertainties in simulated PM2.5 concentrations.
In this paper, we describe a relatively simple bias correction scheme applied to the AIRPACT air quality forecast system for the Pacific Northwest. The bias correction yields much more accurate and reliable PM2.5 results compared to the normal forecast system. As such, the operational bias-corrected forecasts will provide a much better basis for daily air quality management by agencies within the region. The bias-corrected results also highlight issues to guide further improvements to the normal forecast system. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
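A Kalman-filter bias correction of the kind this abstract describes can be sketched as a scalar recursion: track a running estimate of the systematic forecast bias, subtract it from each new forecast, then update the estimate from the latest forecast-minus-observation innovation. The variance ratio and all names below are illustrative assumptions, not AIRPACT's actual code:

```python
def kalman_bias_correct(raw_forecasts, observations, sigma_ratio=0.1):
    """Sequentially estimate the systematic forecast bias with a scalar
    Kalman filter and subtract it from each forecast before the matching
    observation arrives. sigma_ratio is the assumed ratio of bias-evolution
    variance to observation-error variance."""
    b = 0.0      # current bias estimate
    p = 1.0      # its error variance
    corrected = []
    for fcst, obs in zip(raw_forecasts, observations):
        corrected.append(fcst - b)           # correct today's forecast
        if obs is not None:                  # then learn from today's obs
            p += sigma_ratio                 # bias may have drifted
            k = p / (p + 1.0)                # Kalman gain (obs variance = 1)
            b += k * ((fcst - obs) - b)      # innovation: observed bias
            p *= (1.0 - k)
    return corrected
```

For a forecast with a constant +5 offset, the first corrected value is unchanged and later values converge toward the observations as the filter learns the bias, which mirrors the adaptive behavior such schemes show during regime changes like wildfire episodes.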
47. An intraperiod arbitrary ramping-rate changing model in unit commitment.
- Author
-
Dong, Jizhe, Li, Yuanhan, Zuo, Shi, Wu, Xiaomei, Zhang, Zuyao, and Du, Jiang
- Subjects
- *
RENEWABLE energy sources , *BIAS correction (Topology) , *DYNAMIC models - Abstract
In traditional unit commitment models, the ramping process of coal-fired generators is often represented by either a static average ramping rate or a once-changing rate model. However, this approach fails to accurately capture the actual ramping process of the unit, resulting in scheduling biases, particularly in systems with renewable energy sources. To address this issue, we propose a dynamic piecewise linear ramping (DPWLR) model that allows arbitrary multiple changes of the ramping rate within a single period. We shift our focus from slope changes to time changes. By using a series of location indicator variables that satisfy the type 2 special ordered set, we locate the forward and backward time-change limits and then determine the up and down power output limits of the units at each hour. We tested the intraperiod multiple-changing DPWLR model, as part of the unit commitment model, on different systems, including a 3-unit system, a 10-unit system, the IEEE RTS-79, and a 100-unit system. Comparative analysis demonstrates the functionality and superior performance of our proposed model. • A dynamic ramping model to simulate the unit's ramping process is proposed. • The model allows multiple ramping rate changes in one period. • The model uses location indicator variables to identify the ramping limits. • Some reproducibility studies on previous papers are carried out. • Comparison studies between the proposed model and the existing model are conducted. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
48. Locating and tracking of underwater sphere target based on active electrosense.
- Author
-
Peng, Haoran, Jiang, Guangyu, Hu, Qiao, Fu, Tongqiang, and Xu, Dan
- Subjects
- *
MULTIPLE Signal Classification , *SPHERES , *MUSICAL performance , *LEAST squares , *BIAS correction (Topology) - Abstract
Underwater active electrosense serves as a valuable complement to conventional underwater target detection technologies, such as acoustics and optics, by offering advantages at short range and in murky environments. However, there is a need to enhance the precision of target location and to enrich research on tracking moving targets. This paper investigates the locating and tracking of an underwater sphere target using active electrosense. To improve location accuracy, an approach is proposed that generates a global adaptive model correction coefficient through calibration and general regression neural network (GRNN) fitting. A detection system and sensing array with vertically arranged transmitting and receiving electrodes are designed for experimental research, enabling high-resolution target locating and tracking without the need for any active movement of the array. In the scenario where the target is a metallic sphere with a diameter of 50 mm, the location experiments indicate that the least squares (LS) algorithm achieved 24.7% higher location performance than the multiple signal classification (MUSIC) algorithm under an accurate uniform model correction coefficient, and the location error of the LS algorithm was further reduced by 33.2% using the global adaptive model correction coefficient generated by the proposed approach. Building on these findings, tracking functionality based on the discrete location method was achieved under various trajectories and velocities. • A sensing array with vertically oriented transmitting and receiving electrodes was designed. • The sensing array enables high-resolution target locating without any active movement. • The performances of the LS and MUSIC algorithms were studied by location experiments. • GRNN is utilized to generate the model correction coefficient to improve location accuracy. • Tracking of the target was achieved under various trajectories and velocities. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
49. Spatiotemporal characteristics and estimates of extreme precipitation in the Yangtze River Basin using GLDAS data.
- Author
-
Chen, Zeqiang, Zeng, Yi, Shen, Gaoyun, Xiao, Changjiang, Xu, Lei, and Chen, Nengcheng
- Subjects
- *
WATERSHEDS , *ARID regions , *TRIANGLES , *ESTIMATES , *BIAS correction (Topology) - Abstract
The Yangtze River Basin has periodically been subject to torrential rains and floods. It is of great significance to characterize the extreme precipitation patterns in order to understand the characteristics of frequent floods in the Yangtze River Basin. Commonly, the spatiotemporal characteristics of extreme precipitation have been studied by regional frequency analysis with site data, but spatially sparse site data may cause imprecise divisions of homogeneous regions. In this paper, the spatiotemporal characteristics of extreme precipitation were studied by regional frequency analysis with corrected satellite-based grid precipitation data (Global Land Data Assimilation System, GLDAS) rather than site data. The results show that: (1) The corrected GLDAS daily precipitation data greatly improved the ability to capture extreme precipitation events in the Yangtze River Basin, as the average accuracy increased from 0.215 before correction to 0.849 after correction; it is therefore feasible to use satellite-based grid precipitation data in place of site data for the regional frequency analysis of extreme precipitation. (2) The Yangtze River Basin was categorized into seven homogeneous regions for the annual maximum 1-day (RX1DAY) index with an automatic subjective adjustment method. (3) The regional growth curves and quantiles of the Yangtze River Basin were drawn for return periods of 2–100 years. (4) Spatial patterns of the extreme daily precipitation series with a return period of 100 years indicated that the precipitation amount increases gradually from the upper to the lower Yangtze River Basin, from the "arid zone" to the "wet zone" and then to the "special wet zone", and the 100-year return level of RX1DAY varied from 30.3 to 301.8 mm. There were three main precipitation centres: the Sichuan Basin, the Dongting Lake Basin, and a great triangular area covering the Poyang Lake Basin and the southern foot of Dabie Mountain. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
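The return-level estimates this abstract reports come from frequency analysis of annual maxima. As a single-site sketch of that step, one can fit a Gumbel (EV1) distribution by the method of moments and invert it at the target return period; the paper's regional L-moment procedure pools sites within homogeneous regions instead, so this is illustrative only:

```python
import math

def gumbel_return_level(annual_maxima, return_period):
    """Fit a Gumbel (EV1) distribution to an annual-maximum series by the
    method of moments and return the T-year return level (the quantile
    with non-exceedance probability 1 - 1/T)."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi   # scale parameter
    mu = mean - 0.5772 * beta               # location (Euler-Mascheroni const.)
    return mu - beta * math.log(-math.log(1.0 - 1.0 / return_period))
```

As expected for extremes, the estimated 100-year level exceeds every value in a short record, and return levels grow monotonically with the return period.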
50. Efficient algorithms for covariate analysis with dynamic data using nonlinear mixed-effects model.
- Author
-
Yuan, Min, Zhu, Zhi, Yang, Yaning, Zhao, Minghua, Sasser, Kate, Hamadeh, Hisham, Pinheiro, Jose, and Xu, Xu Steven
- Subjects
- *
RANDOM effects model , *DATA analysis , *REGRESSION analysis , *ALGORITHMS , *NONLINEAR equations , *BIAS correction (Topology) , *LOGITS - Abstract
Nonlinear mixed-effects modeling is one of the most popular tools for analyzing repeated measurement data, particularly for applications in the biomedical fields. Multiple integration and nonlinear optimization are the two major challenges for likelihood-based methods in nonlinear mixed-effects modeling. To solve these problems, approaches based on empirical Bayesian estimates have been proposed by breaking the problem into a nonlinear mixed-effects model with no covariates and a linear regression model without random effect. This approach is time-efficient as it involves no covariates in the nonlinear optimization. However, covariate effects based on empirical Bayesian estimates are underestimated and the bias depends on the extent of shrinkage. The marginal correction method has been proposed to correct the bias caused by shrinkage to some extent. However, the marginal approach appears to be suboptimal when testing covariate effects on multiple model parameters, a situation that is often encountered in real-world data analysis. In addition, the marginal approach cannot correct the inaccuracy in the associated p-values. In this paper, we proposed a simultaneous correction method (nSCEBE), which can handle the situation where covariate analysis is performed on multiple model parameters. Simulation studies and real data analysis showed that nSCEBE is accurate and efficient for both effect-size estimation and p-value calculation compared with the existing methods. Importantly, nSCEBE can be >2000 times faster than the standard mixed-effects models, potentially allowing utilization for high-dimension covariate analysis for longitudinal or repeated measured outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF