4,471 results for "BAYES' estimation"
Search Results
2. Factor Graphs for Navigation Applications: A Tutorial.
- Author
- Taylor, Clark and Gross, Jason
- Subjects
- BAYESIAN analysis, KALMAN filtering, BAYES' estimation, LINEAR algebra, NAVIGATION
- Abstract
This tutorial presents the factor graph, a recently introduced estimation framework that is a generalization of the Kalman filter. An approach for constructing a factor graph, with its associated optimization problem and efficient sparse linear algebra formulation, is described. A comparison with Kalman filters is presented, together with examples of the generality of factor graphs. A brief survey of previous applications of factor graphs to navigation problems is also presented. Source code for the extended Kalman filter comparison and for generating the graphs in this paper is available at https://github.com/cntaylor/factorGraph2DsatelliteExample. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Tests of human auditory temporal resolution: Simulations of Bayesian threshold estimation for auditory gap detection.
- Author
- Mori, Shuji, Murata, Yuto, Morimoto, Takashi, Okamoto, Yasuhide, and Kanzaki, Sho
- Subjects
- BAYES' estimation, COMPUTER simulation, STANDARD deviations, PROBABILITY density function, ARITHMETIC mean
- Abstract
In an attempt to develop tests of auditory temporal resolution using gap detection, we conducted computer simulations of Zippy Estimation by Sequential Testing (ZEST), an adaptive Bayesian threshold estimation procedure, for measuring gap detection thresholds. The results showed that the measures of efficiency and precision of ZEST changed with the mean and standard deviation (SD) of the initial probability density function implemented in ZEST. Appropriate combinations of mean and SD values led to efficient ZEST performance; i.e., the threshold estimates converged to their true values after 10 to 15 trials. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
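The ZEST procedure summarized in the entry above admits a compact illustration: a discretized prior pdf over candidate thresholds is multiplied by the response likelihood after each trial, and the next stimulus is placed at the posterior mean. This is a generic sketch, not the authors' simulator; the grid range, logistic psychometric function, slope, and prior settings are all illustrative assumptions.

```python
import numpy as np

def zest_simulation(true_threshold=8.0, prior_mean=10.0, prior_sd=5.0,
                    n_trials=15, slope=1.0, seed=0):
    """Toy ZEST run for a gap-detection-style threshold (arbitrary units)."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 30.0, 601)           # candidate thresholds
    pdf = np.exp(-0.5 * ((grid - prior_mean) / prior_sd) ** 2)
    pdf /= pdf.sum()                             # initial (Gaussian) prior pdf

    def p_yes(stim, thr):                        # logistic psychometric function
        return 1.0 / (1.0 + np.exp(-slope * (stim - thr)))

    for _ in range(n_trials):
        stim = float(np.dot(grid, pdf))          # next stimulus = posterior mean
        response = rng.random() < p_yes(stim, true_threshold)  # simulated observer
        like = p_yes(stim, grid)                 # likelihood of "yes" at each threshold
        pdf *= like if response else (1.0 - like)
        pdf /= pdf.sum()                         # Bayes update
    return float(np.dot(grid, pdf))              # final threshold estimate
```

With a reasonable initial pdf the estimate settles near the true threshold within 10 to 15 trials, which is the convergence behavior the abstract reports; a poorly chosen prior mean or SD slows this down, which is exactly what the paper's simulations explore.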
4. Statistical Modeling of Indus River Outflow at Tarbela Dam using Generalized Gumbel Type 2 Distribution.
- Author
- Ateeq, Kahkashan, Qasim, Tahira Bano, and Kiran, Wajeeha
- Subjects
- BAYES' estimation, FLOOD control, EXTREME value theory, STREAMFLOW, RESOURCE allocation, RAYLEIGH model
- Abstract
The Indus River, a lifeline for Pakistan, holds paramount significance for its geography, history, and economy. This research delves into a comprehensive analysis of the river's behavior by introducing a novel statistical framework. Combining the Gumbel Type 2 distribution and the Rayleigh distribution, a new generalized Gumbel Type 2 (GG2) distribution is derived and used to model the river's outflow at the Tarbela Dam during 2020–2021. Our study contributes to the understanding of the complex dynamics of the Indus River. The GG2 distribution combines the extreme-value orientation of the Gumbel Type 2 distribution with the Rayleigh distribution's aptness for positive-valued variables, capturing the complex characteristics of the river's flow. Parameters are estimated using both classical and Bayesian methods, enhancing the accuracy and reliability of our findings. The Bayes estimators have no closed-form expression, hence the Tierney-Kadane approximation technique is used. Through a simulation study and analysis of the outflow data of the Indus River at the Tarbela Dam, the Bayes estimators demonstrate superior performance in minimizing risk compared to classical estimators. It is shown graphically that our proposed distribution performs better than its competitor distributions. The results not only deepen our understanding of the river's behavior but also offer insights crucial for infrastructure planning, flood control, and resource allocation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Bayesian estimation and prediction under progressive-stress accelerated life test for a log-logistic model.
- Author
- Mahto, Amulya Kumar, Tripathi, Yogesh Mani, Dey, Sanku, Alsaedi, Basim S.O., Alhelali, Marwan H., Alghamdi, Fatimah M., Alrumayh, Amani, and Alshawarbeh, Etaf
- Subjects
- MONTE Carlo method, CENSORING (Statistics), MAXIMUM likelihood statistics, EIGENFUNCTIONS, ACCELERATED life testing, ENTROPY, BAYES' estimation
- Abstract
Accelerated life tests play a very critical role in reliability analysis because highly reliable products are being produced with recent advanced technologies to sustain market demand and competition. A progressive-stress accelerated life test is a kind of accelerated life test in which the applied stress changes continuously. Given the importance of progressive-stress accelerated life tests, this paper deals with the case in which the lifetimes of units follow the log-logistic distribution and are progressively Type-II censored, with the associated scale parameter conforming to the inverse power law. Different estimators of the model parameters are derived using maximum likelihood and Bayesian methods. Interval estimation is also considered. In the sequel, approximate confidence, bootstrap, and Bayes credible intervals are constructed. Bayes estimators are obtained under squared error, LINEX, and entropy loss functions using proper and improper prior distributions. A simulation study is conducted based on various censoring schemes. The coverage percentages and interval widths are computed via Monte Carlo simulations. Bayes predictive estimates and intervals are also obtained. Finally, two different accelerated life test data sets are analyzed for illustration purposes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Robust estimation with exponential squared loss for partially linear panel data model with fixed effects.
- Author
- He, Ping, Yang, Yiping, and Zhao, Peixin
- Subjects
- PANEL analysis, BAYES' estimation, FIXED effects model, MATRIX exponential, DATA modeling, DUMMY variables
- Abstract
In this article, a robust estimation method is proposed for a partially linear panel data model with fixed effects. We eliminate the fixed effects based on an auxiliary linear regression, approximate the unknown nonparametric component with a B-spline function, and obtain robust estimators of the parametric and nonparametric components by combining the projection matrix with an exponential squared loss function. Under some regularity conditions, the asymptotic properties of the resulting estimators are proved. Simulation studies illustrate that the proposed method is more robust than the semiparametric least squares dummy variable estimator. The proposed procedure is illustrated by a real data application. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Objective Bayesian analysis of Marshall-Olkin bivariate Weibull distribution with partial information.
- Author
- Panwar, M. S. and Barnwal, Vikas
- Subjects
- WEIBULL distribution, BAYESIAN analysis, BIVARIATE analysis, BAYES' estimation, COMPETING risks, DIABETIC retinopathy
- Abstract
In competing risks problems, a subset of the risks requires more attention for inferential purposes. In the objective Bayesian paradigm, reference priors make it possible to achieve such inferential objectives. In this article, the Marshall-Olkin bivariate Weibull distribution is considered to model the competing risks data. When partial information is available for some of the parameters, the reference priors are derived according to the importance of the parameters. The Dirichlet prior is taken as a conditional subjective prior, and the marginal reference prior has been derived. Also, the propriety of the resulting posterior density has been proved. The Bayesian estimates of the parameters are obtained under squared error and linear-exponential loss functions. Further, the derived reference prior is used for the computation of Bayes factors or posterior odds in testing the hypothesis that the competing risks are identical. The performance of the established Bayesian estimators is illustrated using the Diabetic Retinopathy Study (DRS) and Prostate Cancer data sets. Finally, model compatibility is assessed for the considered data sets under the Bayesian paradigm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Bayesian estimation of inverse Weibull distribution scale parameter under different loss functions.
- Author
- BABACAN, Esin KÖKSAL
- Subjects
- WEIBULL distribution, MAXIMUM likelihood statistics, ERROR functions, EXPONENTIAL functions, PARAMETER estimation, BAYES' estimation
- Abstract
In this paper, Bayesian estimators for the scale parameter of the Inverse Weibull Distribution (IWD) are derived when the shape parameter of the distribution is known. The Bayesian estimators are obtained using a Gamma prior under different types of loss functions: the square error loss function (SELF), the entropy loss function (ELF), the precautionary loss function (PLF), the linear exponential loss function (LINEXLF), and the nonlinear exponential loss function (NLINEXLF). The classical maximum likelihood estimator (MLE) for the parameter is also derived. To compare the efficiency of the parameter estimation methods, a simulation study is carried out. The comparison is based on mean square error. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
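For the conjugate setup the entry above describes, the estimators have closed forms. The sketch below assumes the parametrization f(x) = θβx^(−β−1)exp(−θx^(−β)) with known shape β, so a Gamma(a, b) prior on the scale θ (with b a rate) yields a Gamma(a + n, b + Σx_i^(−β)) posterior; the prior values and the LINEX constant are illustrative choices, not the paper's.

```python
import numpy as np

def iw_scale_bayes(x, beta, a=2.0, b=1.0, c=0.5):
    """Closed-form estimators for the inverse Weibull scale theta, shape beta known.

    Likelihood in theta: theta**n * exp(-theta * T) with T = sum(x**-beta),
    so a Gamma(a, b) prior (b = rate) gives a Gamma(a + n, b + T) posterior.
    """
    x = np.asarray(x, dtype=float)
    n, T = x.size, float(np.sum(x ** -beta))
    alpha, rate = a + n, b + T
    mle = n / T                                       # classical MLE
    self_est = alpha / rate                           # posterior mean (SELF)
    elf_est = (alpha - 1.0) / rate                    # entropy loss: 1 / E[1/theta]
    plf_est = np.sqrt(alpha * (alpha + 1.0)) / rate   # precautionary loss: sqrt(E[theta^2])
    linex_est = (alpha / c) * np.log(1.0 + c / rate)  # LINEX: -(1/c) ln E[exp(-c*theta)]
    return mle, self_est, elf_est, plf_est, linex_est
```

For a posterior Gamma(α, r) these estimators are always ordered ELF < SELF < PLF, since they equal (α − 1)/r, α/r, and √(α(α + 1))/r respectively.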
9. Bayesian and E-Bayesian Estimation for a Modified Topp Leone–Chen Distribution Based on a Progressive Type-II Censoring Scheme.
- Author
- Kalantan, Zakiah I., Swielum, Eman M., AL-Sayed, Neama T., EL-Helbawy, Abeer A., AL-Dayian, Gannat R., and Abd Elaal, Mervat
- Subjects
- CENSORING (Statistics), BAYES' estimation, ERROR functions, SYMMETRIC functions, EXPONENTIAL functions
- Abstract
This paper is concerned with applying the Bayesian and E-Bayesian approaches to estimating the unknown parameters of the modified Topp–Leone–Chen distribution under a progressive Type-II censored sample plan. The paper explores the complexities of different estimating methods and investigates the behavior of the estimates through some computations. The Bayes and E-Bayes estimators are obtained under two distinct loss functions: the balanced squared error loss function, as a symmetric loss function, and the balanced linear exponential loss function, as an asymmetric loss function. The estimators are derived using gamma prior and uniform hyperprior distributions. A numerical illustration is given to examine the theoretical results, using the Metropolis–Hastings Markov chain Monte Carlo simulation algorithm implemented in the R programming language. Finally, real-life data sets are applied to prove the flexibility and applicability of the model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Dynamical system identification, model selection, and model uncertainty quantification by Bayesian inference.
- Author
- Niven, Robert K., Cordier, Laurent, Mohammad-Djafari, Ali, Abel, Markus, and Quade, Markus
- Subjects
- DYNAMICAL systems, TIKHONOV regularization, SYSTEM identification, BAYESIAN field theory, RANDOM noise theory, BAYES' estimation
- Abstract
This study presents a Bayesian maximum a posteriori (MAP) framework for dynamical system identification from time-series data. This is shown to be equivalent to a generalized Tikhonov regularization, providing a rational justification for the choice of the residual and regularization terms, respectively, from the negative logarithms of the likelihood and prior distributions. In addition to the estimation of model coefficients, the Bayesian interpretation gives access to the full apparatus for Bayesian inference, including the ranking of models, the quantification of model uncertainties, and the estimation of unknown (nuisance) hyperparameters. Two Bayesian algorithms, joint MAP and variational Bayesian approximation, are compared to the least absolute shrinkage and selection operator (LASSO), ridge regression, and the sparse identification of nonlinear dynamics (SINDy) algorithms for sparse regression by application to several dynamical systems with added Gaussian or Laplace noise. For multivariate Gaussian likelihood and prior distributions, the Bayesian formulation gives Gaussian posterior and evidence distributions, in which the numerator terms can be expressed in terms of the Mahalanobis distance or "Gaussian norm" $\|y-\hat{y}\|_{M^{-1}}^{2}=(y-\hat{y})^{\top}M^{-1}(y-\hat{y})$, where $y$ is a vector variable, $\hat{y}$ is its estimator, and $M$ is the covariance matrix. The posterior Gaussian norm is shown to provide a robust metric for quantitative model selection for the different systems and noise models examined. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
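The MAP-Tikhonov equivalence described in the entry above can be checked on a toy linear model: with Gaussian likelihood y ~ N(Xw, σ²I) and prior w ~ N(0, τ²I), the negative log posterior is a ridge objective with λ = σ²/τ², so the MAP estimate solves (XᵀX + λI)w = Xᵀy. The design matrix, noise level, and true coefficients below are illustrative, not the dynamical systems from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))                     # toy design matrix
w_true = np.array([1.5, -2.0, 0.5])
sigma, tau = 0.1, 1.0                            # noise sd and prior sd (illustrative)
y = X @ w_true + sigma * rng.normal(size=50)

lam = sigma ** 2 / tau ** 2                      # Tikhonov weight from the two variances
w_map = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)   # MAP = ridge solution

# Gradient of the negative log posterior; it vanishes at the MAP estimate:
grad = X.T @ (X @ w_map - y) / sigma ** 2 + w_map / tau ** 2
```

The vanishing gradient confirms that the ridge solution is the posterior mode; swapping the Gaussian prior for a Laplace prior turns the regularizer into an L1 penalty, which is the LASSO connection the paper exploits.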
11. Point estimation and related classification problems for several Lindley populations with application using COVID-19 data.
- Author
- Bal, Debasmita, Tripathy, Manas Ranjan, and Kumar, Somesh
- Subjects
- FIX-point estimation, MARKOV chain Monte Carlo, BAYES' estimation, MAXIMUM likelihood statistics, COVID-19
- Abstract
The problems of point estimation and classification under the assumption that the training data follow a Lindley distribution are considered. Bayes estimators are derived for the parameter of the Lindley distribution using the Markov chain Monte Carlo (MCMC) method and Tierney and Kadane's [Tierney and Kadane, Accurate approximations for posterior moments and marginal densities, J. Amer. Statist. Assoc. 81 (1986), pp. 82–86] approximation. In the sequel, we prove that the Bayes estimators using Tierney and Kadane's approximation and Lindley's approximation both converge to the maximum likelihood estimator (MLE) as n → ∞, where n is the sample size. The performances of all the proposed estimators are compared numerically with some existing estimators using bias and mean squared error (MSE). Our simulation study shows that the proposed estimators perform better than some of the existing ones. Applying these estimators, we construct several plug-in type classification rules and a rule that uses the likelihood accordance function. The performance of each rule is numerically evaluated using the expected probability of misclassification (EPM). Two real-life examples related to COVID-19 disease are considered for illustrative purposes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
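For context on the entry above: the Lindley(θ) MLE, to which the paper shows the Tierney-Kadane and Lindley-approximation Bayes estimators converge, has a closed form obtained by setting the score of the log-likelihood to zero. The sketch below is a standard fact about the Lindley distribution (pdf f(x) = θ²/(θ + 1)(1 + x)e^(−θx)), not code from the paper.

```python
import numpy as np

def lindley_mle(x):
    """Closed-form MLE of theta for Lindley(theta) data.

    The score equation reduces to the quadratic
        xbar*theta**2 + (xbar - 1)*theta - 2 = 0,
    whose positive root is the MLE.
    """
    xbar = float(np.mean(x))
    return (-(xbar - 1.0) + np.sqrt((xbar - 1.0) ** 2 + 8.0 * xbar)) / (2.0 * xbar)
```

A Lindley(θ) variate can be simulated as a mixture, Exp(θ) with probability θ/(θ + 1) and Gamma(2, θ) otherwise, which makes the estimator easy to sanity-check.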
12. Inference of stress-strength reliability based on adaptive progressive Type-II censoring from Chen distribution with application to carbon fiber data.
- Author
- Ahmed, Essam A. and Al-Essa, Laila A.
- Subjects
- MARKOV chain Monte Carlo, GIBBS sampling, BAYES' estimation, MAXIMUM likelihood statistics, ASYMPTOTIC distribution
- Abstract
In this paper, we used maximum likelihood estimation (MLE) and Bayes methods to estimate the reliability of stress-strength R = P(Y < X) based on independent adaptive progressive censored samples taken from the Chen distribution. An approximate confidence interval for R was constructed using a variety of classical techniques, such as the normal approximation of the MLE, the normal approximation of the log-transformed MLE, and the percentile bootstrap (Boot-p) procedure. Additionally, asymptotic distribution theory and the delta approach were used to generate the approximate confidence interval. Further, the Bayesian estimation of R was obtained based on the balanced loss function, in two versions: the symmetric balanced squared error (BSE) loss function and the asymmetric balanced linear exponential (BLINEX) loss function. When estimating R using the Bayesian approach, all the unknown parameters of the Chen distribution were assumed to be independently distributed with informative gamma priors. A mixture of the Gibbs sampling and Metropolis-Hastings algorithms was used to compute the Bayes estimate of R and the associated highest posterior density credible interval. In the end, a simulation study was used to assess the overall performance of the proposed estimators, and a real dataset was provided to exemplify the theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Efficient variable selection for high-dimensional multiplicative models: a novel LPRE-based approach.
- Author
- Chen, Yinjun, Ming, Hao, and Yang, Hu
- Subjects
- GOLD sales & prices, DATA analysis, BAYES' estimation, MEDICAL research
- Abstract
This paper explores a novel high-dimensional sparse multiplicative model, which deals with positive-response data arising particularly in economic and biomedical research. The proposed regularized method is based on the least product relative error (LPRE) criterion and can be applied with various penalties, including adaptive Lasso, SCAD, and MCP. An adjusted ADMM algorithm is adopted to obtain the estimators based on the LPRE loss. Additionally, we prove the consistency and derive the convergence rates of the estimator. To validate the effectiveness of the proposed method, we conduct extensive numerical studies and real data analysis on the well-known Boston housing and gold price datasets, yielding valuable insights and practical applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. EMTT-YOLO: An Efficient Multiple Target Detection and Tracking Method for Mariculture Network Based on Deep Learning.
- Author
- Lv, Chunfeng, Yang, Hongwei, and Zhu, Jianping
- Subjects
- MULTIPLE target tracking, BAYES' estimation, MARICULTURE, MULTIPLE comparisons (Statistics), AQUACULTURE, DEEP learning
- Abstract
Efficient multiple target tracking (MTT) is key to achieving green, precise, and large-scale aquaculture, marine exploration, and marine farming. Traditional MTT methods based on Bayes estimation have open problems such as an unknown detection probability, random target births, and complex data association, which lead to inefficient tracking performance. In this work, an efficient two-stage MTT method based on a YOLOv8 detector and an SMC-PHD tracker, named EMTT-YOLO, is proposed to enhance the detection probability and thereby improve tracking performance. First, in the detection stage, the YOLOv8 model, which adopts several improved modules to enhance detection performance, is introduced to detect multiple targets and derive extracted features such as the bounding box coordinates, confidence, and detection probability. Second, particles are built based on the previous detection results, and the SMC-PHD filter, the tracking stage, is proposed to track multiple targets. Third, the lightweight Hungarian data association method is introduced to establish data relevance and derive the trajectories of multiple targets. Moreover, comprehensive experiments are presented to verify the effectiveness of the two-stage EMTT-YOLO tracking method. Comparisons with other multiple target detection and tracking methods also demonstrate that detection and tracking performance is greatly improved. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Inference for Kumaraswamy‐G family of distributions under unified progressive hybrid censoring with partially observed competing risks data.
- Author
- Dutta, Subhankar, Ng, Hon Keung Tony, and Kayal, Suchandan
- Subjects
- BAYES' estimation, COMPETING risks, MONTE Carlo method, INFERENTIAL statistics, CENSORSHIP, ERROR functions
- Abstract
In this study, statistical inference for a competing risks model with latent failure times following the Kumaraswamy‐G (Kw‐G) family of distributions under a unified progressive hybrid censoring (UPHC) scheme is developed. Maximum likelihood estimates (MLEs) of the unknown model parameters are obtained, and their existence and uniqueness properties are discussed. Using the asymptotic properties of MLEs, the approximate confidence intervals for the model parameters are constructed. Further, Bayes estimates with associated highest posterior density credible intervals for the model parameters are developed under the squared error loss function with informative and noninformative priors. These estimates are obtained under both restricted and nonrestricted parameter spaces. Moreover, frequentist and Bayesian approaches are developed to test the equality of shape parameters of the two competing failure causes. The comparison of censoring schemes based on different criteria is also discussed. Monte Carlo simulation studies are used to evaluate the performance of the proposed statistical inference procedures. An electrical appliances data set is analyzed to illustrate the applicability of the proposed methodologies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Reliability estimation in multicomponent stress-strength model using weighted exponential-Lindley distribution.
- Author
- Sharma, Sunita and Kumar, Vinod
- Subjects
- GIBBS sampling, BAYES' estimation, ERROR functions, BAYESIAN analysis, CONFIDENCE intervals, RELIABILITY in engineering
- Abstract
This research investigates reliability estimation in the multicomponent stress-strength model when both the stress and the strengths are drawn from the weighted exponential-Lindley distribution. Reliability assessment is carried out using classical and Bayesian approaches. The Bayes estimates of the reliability in the multicomponent stress-strength model are derived under a squared error loss function using informative and non-informative priors for the parameters. Further, Lindley's approximation and the Gibbs sampling method are used to develop Bayes estimators for the system's reliability, since no explicit forms are available. Additionally, an asymptotic confidence interval and the highest probability density credible interval are constructed to gauge system performance. A simulation study is conducted to assess the performance of the reliability estimators. Finally, a real data set is analysed for illustrative purposes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
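The multicomponent stress-strength reliability studied in the entry above, R(s,k) = P(at least s of k independent strengths exceed a common random stress), can always be sanity-checked by Monte Carlo regardless of the underlying model. The sketch below takes the samplers as arguments; the exponential stand-ins in the test are illustrative, not the paper's weighted exponential-Lindley model.

```python
import numpy as np

def multicomponent_reliability_mc(draw_stress, draw_strength, s, k,
                                  n_sim=100_000, seed=3):
    """Monte Carlo estimate of R_{s,k} = P(at least s of k strengths > stress)."""
    rng = np.random.default_rng(seed)
    y = draw_stress(rng, n_sim)                  # one common stress per replication
    x = draw_strength(rng, (n_sim, k))           # k iid strengths per replication
    surviving = (x > y[:, None]).sum(axis=1)     # components exceeding the stress
    return float(np.mean(surviving >= s))
```

With unit-rate exponential stress and strengths, R(1,1) = 1/2 and R(1,2) = 2/3 analytically, which the estimator reproduces to Monte Carlo accuracy; such checks are a useful complement to the closed-form and approximation-based estimators the paper develops.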
17. Bayesian prior modeling in vector autoregressions via the Yule-Walker equations.
- Author
- Spezia, Luigi
- Subjects
- VECTOR autoregression model, TIME series analysis, COVARIANCE matrices, MARKOV chain Monte Carlo, WHITE noise, BAYES' estimation, EQUATIONS
- Abstract
In multivariate time series analysis, the Yule-Walker method refers to a system of equations relating the cross-covariances of a stationary vector autoregressive (VAR) model to the matrices of the autoregressive coefficients and the covariance matrix of the noise, both of which are unknown and must be estimated. In Bayesian inference of VAR models, one of the key problems is the setting of the prior distributions on these unknown parameters. The Yule-Walker equations are used here to develop a novel prior specification that exploits the reparameterization of the unknowns in terms of the mean, the cross-covariances, and the covariance of the process. Further, the cross-covariance matrices are separated out in terms of the standard deviations and the correlations. All these new quantities are easier to handle because it is more common to have prior information on the mean and the correlation structure of a multiple time series than on the underlying autoregressive coefficients and the white noise process. The proposed prior specification is suitable for both noninformative and informative settings. Through the Yule-Walker based prior, parameter estimation and structure learning of stationary VAR models are performed via Markov chain Monte Carlo methods. The methodology is illustrated via some synthetic data sets, a benchmark example, and an environmental time series. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
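The Yule-Walker relations underlying the prior in the entry above are easy to demonstrate for a VAR(1): Γ(1) = AΓ(0) and Σ = Γ(0) − AΓ(1)ᵀ, so the autoregressive matrix and noise covariance are recoverable from the process covariances, which is exactly the direction of reparameterization the prior exploits. The toy coefficients below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
A = np.array([[0.5, 0.1],
              [-0.2, 0.3]])                      # stable VAR(1) coefficient matrix
Sigma = 0.25 * np.eye(2)                         # white-noise covariance
T = 100_000
x = np.zeros((T, 2))
eps = rng.multivariate_normal(np.zeros(2), Sigma, size=T)
for t in range(1, T):
    x[t] = A @ x[t - 1] + eps[t]                 # simulate the stationary process

xc = x - x.mean(axis=0)
G0 = xc.T @ xc / T                               # sample Gamma(0) = Var(x_t)
G1 = xc[1:].T @ xc[:-1] / (T - 1)                # sample Gamma(1) = Cov(x_t, x_{t-1})
A_hat = G1 @ np.linalg.inv(G0)                   # Yule-Walker: A = Gamma(1) Gamma(0)^{-1}
Sigma_hat = G0 - A_hat @ G1.T                    # Sigma = Gamma(0) - A Gamma(1)^T
```

Because the mapping works in both directions, prior beliefs stated on the (interpretable) means, standard deviations, and correlations of the process translate into an implied prior on A and Σ, which is the construction the paper formalizes.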
18. Inference on exponentiated Rayleigh distribution with constant stress partially accelerated life tests under progressive type-II censoring.
- Author
- Yao, Huiying and Gui, Wenhao
- Subjects
- ACCELERATED life testing, CENSORING (Statistics), RAYLEIGH model, MAXIMUM likelihood statistics, STRESS concentration, BAYES' estimation, PARAMETER estimation
- Abstract
This study explores the problem of evaluating the parameters and the acceleration factor in constant-stress partially accelerated life tests when the latent failure times follow an exponentiated Rayleigh distribution. Within the framework of progressive Type-II censoring schemes, we employ the Newton-Raphson algorithm as an iterative methodology to obtain the maximum likelihood estimates, accompanied by proof of the existence of these point estimators. We also construct asymptotic confidence intervals for the parameters of interest and the acceleration factor by utilizing the asymptotic characteristics of the maximum likelihood estimators. The Bayesian estimates of the unknown parameters are derived using independent gamma priors and a dependent Gamma-Dirichlet prior, on the basis of the squared error and relatively smooth LINEX loss functions, respectively. Furthermore, we adopt the importance sampling method to compute Bayesian point estimates and the highest posterior density credible intervals. To validate the effectiveness of the suggested approaches, a series of simulated experiments are carried out. Lastly, we conduct analyses on two actual datasets to show the applicability of the suggested techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Robust estimation of loss‐based measures of model performance under covariate shift.
- Author
- Morrison, Samantha, Gatsonis, Constantine, Dahabreh, Issa J., Li, Bing, and Steingrimsson, Jon A.
- Subjects
- HEALTH & Nutrition Examination Survey, BAYES' estimation
- Abstract
We present methods for estimating loss‐based measures of the performance of a prediction model in a target population that differs from the source population in which the model was developed, in settings where outcome and covariate data are available from the source population but only covariate data are available on a simple random sample from the target population. Prior work adjusting for differences between the two populations has used various weighting estimators with inverse odds or density ratio weights. Here, we develop more robust estimators for the target population risk (expected loss) that can be used with data‐adaptive (e.g., machine learning‐based) estimation of nuisance parameters. We examine the large‐sample properties of the estimators and evaluate finite‐sample performance in simulations. Last, we apply the methods to data from lung cancer screening using nationally representative data from the National Health and Nutrition Examination Survey (NHANES) and extend our methods to account for the complex survey design of the NHANES. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. BAYESIAN PARAMETER ESTIMATION OF POWER FUNCTION DISTRIBUTION UNDER DIFFERENT LOSS FUNCTIONS.
- Author
- Ali, Ashraf, Iqbal, Muhammad Zafar, Ghamkhar, Madiha, Anjum, Adeel, and Murtaza, Hafiz Bilal
- Subjects
- ERROR functions, MAXIMUM likelihood statistics, BAYESIAN analysis, GAMMA distributions, INFERENTIAL statistics, BAYES' estimation
- Abstract
Bayesian analysis is a modern parameter estimation technique in statistical inference and real-world applications. We use the power function distribution (PFD) and an informative prior (the gamma distribution) to find the Bayes estimators of the parameter under various loss functions: the square error loss function (SELF), the quadratic error loss function (QELF), the weighted square error loss function (WSELF), the precautionary error loss function (PELF), the K loss function (KLF), the entropy error loss function (EELF), the modified linear exponential error loss function (MLINEXELF), and the non-linear exponential error loss function (NLINEXLF). Furthermore, we compare the Bayes estimators with the classical maximum likelihood estimator (MLE) to evaluate their performance under these loss functions. Finally, the results are shown graphically using R software. [ABSTRACT FROM AUTHOR]
- Published
- 2024
21. Robust Matrix Completion with Heavy-tailed Noise.
- Author
- Wang, Bingyan and Fan, Jianqing
- Subjects
- LOW-rank matrices, MATRIX decomposition, STATISTICAL errors, BAYES' estimation, EUCLIDEAN algorithm
- Abstract
This paper studies noisy low-rank matrix completion in the presence of heavy-tailed and possibly asymmetric noise, where we aim to estimate an underlying low-rank matrix given a set of highly incomplete noisy entries. Though the matrix completion problem has attracted much attention in the past decade, there is still a lack of theoretical understanding when the observations are contaminated by heavy-tailed noise. Prior theory falls short of explaining the empirical results and is unable to capture the optimal dependence of the estimation error on the noise level. In this paper, we adopt an adaptive Huber loss to accommodate heavy-tailed noise, which is robust against large and possibly asymmetric errors when the parameter in the Huber loss function is carefully designed to balance the Huberization biases and robustness to outliers. Then, we propose an efficient nonconvex algorithm via a balanced low-rank Burer-Monteiro matrix factorization and gradient descent with robust spectral initialization. We prove that under merely a bounded second-moment condition on the error distributions, rather than the sub-Gaussian assumption, the Euclidean errors of the iterates generated by the proposed algorithm decrease geometrically fast until achieving a minimax-optimal statistical estimation error, which has the same order as that in the sub-Gaussian case. The key technique behind this significant advancement is a powerful leave-one-out analysis framework. The theoretical results are corroborated by our numerical studies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
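The Huber loss at the core of the entry above can be illustrated in the simplest robust-estimation setting, location estimation, stripped of the matrix-completion machinery: the loss is quadratic for residuals within τ and linear beyond, so its gradient clips residuals at τ and bounds the influence of heavy-tailed outliers. The step size, iteration count, and τ below are illustrative choices, not the paper's adaptive tuning.

```python
import numpy as np

def huber_location(x, tau, n_iter=200, lr=0.5):
    """Gradient descent on the mean Huber loss for a scalar location.

    Huber loss: 0.5*r**2 if |r| <= tau, else tau*|r| - 0.5*tau**2;
    its derivative in r is the clipped residual clip(r, -tau, tau).
    """
    theta = float(np.median(x))                  # robust starting point
    for _ in range(n_iter):
        r = x - theta
        grad = -np.clip(r, -tau, tau).mean()     # gradient of mean Huber loss in theta
        theta -= lr * grad
    return theta
```

On heavy-tailed data the clipped gradient keeps the estimate stable where the sample mean may be badly corrupted; the paper extends this robustness, with a carefully scaled τ, to gradient descent on a low-rank factorization.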
22. Statistical Inferences about Parameters of the Pseudo Lindley Distribution with Acceptance Sampling Plans.
- Author
- Eissa, Fatehi Yahya, Sonar, Chhaya Dhanraj, Alamri, Osama Abdulaziz, and Tolba, Ahlam H.
- Subjects
- MARKOV chain Monte Carlo, ACCEPTANCE sampling, ERROR functions, INFERENTIAL statistics, ENTROPY, BAYES' estimation
- Abstract
Different non-Bayesian and Bayesian techniques were used to estimate the parameters of the pseudo-Lindley (PsL) distribution in this study. To derive the Bayesian estimators, one must assume appropriate priors on the parameters and use loss functions such as squared error (SE), general entropy (GE), and linear-exponential (LINEX). Since no closed-form solutions are available for the Bayes estimates under these loss functions, the Markov chain Monte Carlo (MCMC) approach was used. Simulation studies were conducted to evaluate the estimators' performance under the given loss functions. Furthermore, we exhibit the adaptability and practicality of the PsL distribution through real-world data applications, which is essential for evaluating the various estimation techniques. Acceptance sampling plans were also developed for items whose lifespans approximately follow the PsL distribution. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Inference on the reliability of inverse Weibull with multiply Type-I censored data.
- Author
- Mou, Zhengcheng, Liu, Guojun, Chiang, Jyun-You, and Chen, Sihong
- Subjects
- FISHER information, MAXIMUM likelihood statistics, FIX-point estimation, CENSORSHIP, INFERENTIAL statistics, CENSORING (Statistics), BAYES' estimation
- Abstract
The inverse Weibull (IW) distribution is widely used due to its non-monotonic hazard function. For the IW distribution, existing research on statistical inference has mostly focused on censored data, but to the best of our knowledge there has been no study on multiply Type-I censoring until now. Multiply Type-I censoring, an extended form of Type-I censoring, is frequently encountered in industrial and medical research and also deserves attention. Thus, this study conducts detailed analyses of the reliability of the IW distribution with multiply Type-I censored data, covering both point estimation and confidence interval (CI) construction. Four methods are adopted for point estimation: maximum likelihood estimation (MLE), least squares estimation (LSE), and two Bayesian estimation methods, namely MCMC and Lindley. Results show that the Lindley method performs best for small mission times, while MLE is optimal for large mission times. For CI construction, we propose a pivotal quantity based on the LSE to construct CIs for the reliability and compare it with two popular methods, Fisher information matrix (FIM) derivation and the MCMC algorithm. Our proposed method performs competitively with MCMC and outperforms FIM. Finally, these estimation methods are applied to an example for illustration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. On the Performance of Horseshoe Priors for Inducing Sparsity in Structural Equation Models.
- Author
-
Harra, Kjorte and Kaplan, David
- Subjects
- *
STRUCTURAL equation modeling , *HORSESHOES , *PATH analysis (Statistics) , *JOB performance , *BAYES' estimation , *MULTICOLLINEARITY - Abstract
The present work focuses on the performance of two types of shrinkage priors—the horseshoe prior and the recently developed regularized horseshoe prior—in the context of inducing sparsity in path analysis and growth curve models. Prior research has shown that these horseshoe priors induce sparsity by at least as much as the "gold standard" spike-and-slab prior. The horseshoe priors are compared to the ridge prior and lasso prior, as well as default non-informative priors, in terms of the percent shrinkage in the model parameters and out-of-sample predictive performance. Empirical studies using data from two large-scale educational assessments reveal the clear advantages of the horseshoe priors in terms of both shrinkage and predictive performance. Simulation studies reveal clear advantages in terms of shrinkage, but less obvious advantages in terms of predictive performance, except in the small sample size condition where both horseshoe priors provide noticeably improved predictive performance. [ABSTRACT FROM AUTHOR]
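As a reference point for the comparison above, the (unregularized) horseshoe prior places a heavy-tailed half-Cauchy local scale on each coefficient under a global shrinkage scale τ. A minimal sampling sketch, with τ fixed for illustration; in full Bayesian use τ receives its own prior:

```python
import numpy as np

def sample_horseshoe(n, tau=1.0, seed=0):
    """Draw n coefficients from the horseshoe prior:
    beta_j ~ N(0, (lambda_j * tau)^2), lambda_j ~ Half-Cauchy(0, 1).

    The half-Cauchy local scales give a pole at zero (strong shrinkage
    of noise) plus heavy tails (signals escape shrinkage)."""
    rng = np.random.default_rng(seed)
    lam = np.abs(rng.standard_cauchy(n))  # local shrinkage scales
    return rng.normal(0.0, lam * tau)
```

The regularized horseshoe compared in the paper additionally caps the local scales so that large coefficients are not left entirely unshrunk.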
- Published
- 2024
- Full Text
- View/download PDF
25. Superpopulation model inference for non-probability samples under informative sampling.
- Author
-
Liu, Zhan, Wang, Dianni, and Pan, Yingli
- Subjects
- *
NONPROBABILITY sampling , *HEALTH & Nutrition Examination Survey , *BAYES' estimation - Abstract
Non-probability samples have been extensively applied in various fields in recent years. However, it is difficult to infer the population from non-probability samples since their selection probabilities are unknown. Superpopulation modeling has recently been explored to make inference from non-probability samples. However, the existing superpopulation model approaches rely mostly on the noninformative sampling assumption. When sampling is informative, the sample distribution differs from the population distribution, and ignoring this point may result in biased estimation. In this paper, taking the informative sampling scheme into account, a superpopulation model approach for non-probability samples is proposed. Exponential, logistic, and probit models are established to describe the informative sampling mechanism. The sample likelihood is derived to estimate the superpopulation model parameters for non-probability samples under the various informative sampling models. The population mean estimator can then be obtained from the fitted superpopulation model under the different informative sampling approaches. The theoretical properties of the proposed estimator are established. Simulation results illustrate the performance of the proposed method for different informative sampling models and sample sizes. The proposed method is also applied to data from the National Health and Nutrition Examination Survey. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Record-based transmuted generalized linear exponential distribution with increasing, decreasing and bathtub shaped failure rates.
- Author
-
Arshad, Mohd, Khetan, Mukti, Kumar, Vijay, and Pathak, Ashok Kumar
- Subjects
- *
DISTRIBUTION (Probability theory) , *MONTE Carlo method , *PROBABILITY density function , *LEAST squares , *MAXIMUM likelihood statistics , *BIAS correction (Topology) , *EXPONENTIAL functions , *BAYES' estimation - Abstract
The linear exponential distribution is a generalization of the exponential and Rayleigh distributions. This distribution is one of the best models for fitting data with an increasing failure rate (IFR), but it does not provide a reasonable fit for data with a decreasing failure rate (DFR) or a bathtub-shaped failure rate (BTFR). To overcome this drawback, we propose a new record-based transmuted generalized linear exponential (RTGLE) distribution using the technique of Balakrishnan and He. The family of RTGLE distributions is more flexible for fitting data sets with IFR, DFR, and BTFR, and also generalizes several well-known models as well as some new record-based transmuted models. This paper studies the statistical properties of the RTGLE distribution, such as the shapes of the probability density function and hazard function, the quantile function and its applications, moments and the moment generating function, order and record statistics, and Rényi entropy. The maximum likelihood estimators, least squares and weighted least squares estimators, Anderson-Darling estimators, and Cramér-von Mises estimators of the unknown parameters are constructed, and their biases and mean squared errors are reported via a Monte Carlo simulation study. Finally, real data sets illustrate the goodness of fit and applicability of the proposed distribution; hence, suitable recommendations are made. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. On inference in a class of exponential distribution under imperfect maintenance.
- Author
-
Kamranfar, Hoda, Ahmadi, Kambiz, and Fouladirad, Mitra
- Subjects
- *
DISTRIBUTION (Probability theory) , *MONTE Carlo method , *BAYES' estimation , *MAXIMUM likelihood statistics , *INFERENTIAL statistics , *CONFIDENCE intervals - Abstract
This paper deals with statistical inference for lifetime data in the presence of imperfect maintenance. For the maintenance model, the Sheu and Griffith model is considered, and the lifetime distribution belongs to the exponential class of distributions. The maximum likelihood estimation procedure for the model parameters is discussed, and confidence intervals are provided using asymptotic likelihood theory and a bootstrap approach. Based on conjugate and discrete priors, Bayesian estimators of the model parameters are developed under symmetric and asymmetric loss functions. The proposed methodologies are applied to simulated data, and a sensitivity analysis with respect to different parameters and data characteristics is carried out. The effect of model misspecification within this class of distributions is also assessed through a Monte Carlo simulation study. Finally, two datasets are analyzed for demonstrative aims. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Divergences based Bayesian inference with censored data.
- Author
-
Boukeloua, Mohamed
- Subjects
- *
BAYESIAN field theory , *LAW of large numbers , *BAYES' estimation , *CENSORING (Statistics) , *ASYMPTOTIC normality , *PARAMETRIC modeling - Abstract
In this work, we deal with some Bayesian inference problems in the presence of right censored data. First, we propose dual ϕ-divergence Bayes type estimators for parametric models and establish their asymptotic normality; to establish this result, we also prove a uniform strong law of large numbers. Then, we consider the problem of constructing prior distributions for model selection using ϕ-divergences. Finally, we consider the problem of predictive density estimation on the basis of ϕ-divergences. We apply an expansion result of the generalized Bayesian predictive density to two parametric models widely used in survival analysis, namely the Weibull and the inverse Weibull models, under right censoring. We also check the performance of our proposed methods through simulations and real data applications. The results of these studies show that our proposed dual ϕ-divergence Bayes type estimators are more robust than other Bayesian estimators. Moreover, the generalized Bayesian predictive density performs better than the classical estimative density, especially for the inverse Weibull model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Estimation procedures and optimal censoring schemes for an improved adaptive progressively type-II censored Weibull distribution.
- Author
-
Nassar, Mazen and Elshahhat, Ahmed
- Subjects
- *
WEIBULL distribution , *CENSORING (Statistics) , *ASYMPTOTIC normality , *BAYES' estimation , *FIX-point estimation , *HAZARD function (Statistics) - Abstract
This paper investigates estimation for the Weibull distribution under an improved adaptive Type-II progressive censoring scheme, which effectively guarantees that the experimental time will not exceed a pre-fixed time. Point and interval estimation using two classical methods, namely maximum likelihood and maximum product of spacings, are considered to estimate the unknown parameters as well as the reliability and hazard rate functions. Approximate confidence intervals for these quantities are obtained based on the asymptotic normality of the maximum likelihood and maximum product of spacings estimators. Bayesian estimation is also considered using MCMC techniques based on the two classical approaches. An extensive simulation study is implemented to compare the performance of the different methods. Further, we propose the use of various optimality criteria to find the optimal sampling scheme. Finally, one real data set is applied to show how the proposed estimators and the optimality criteria work in real-life scenarios. The numerical outcomes demonstrate that the Bayesian estimates using the likelihood and product of spacings functions perform better than the classical estimates. [ABSTRACT FROM AUTHOR]
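The maximum product of spacings method named above maximizes, over the distribution's parameters, the product of CDF increments at the ordered sample. A minimal sketch of that objective, assuming a generic vectorized `cdf`:

```python
import numpy as np

def log_product_of_spacings(cdf, data):
    """Log of the product of spacings: sum(log(F(x_(i)) - F(x_(i-1)))),
    with the conventions F(x_(0)) = 0 and F(x_(n+1)) = 1.

    Maximizing this over the parameters of F gives the MPS estimate."""
    u = np.concatenate(([0.0], np.sort(cdf(np.asarray(data))), [1.0]))
    return float(np.sum(np.log(np.diff(u))))
```

For example, with a Uniform(0, 1) CDF and the sample {0.25, 0.5, 0.75}, the four spacings are each 0.25.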
- Published
- 2024
- Full Text
- View/download PDF
30. Statistical inference for GQARCH‐Itô‐jumps model based on the realized range volatility.
- Author
-
Fu, Jin Yu, Lin, Jin Guan, Liu, Guangying, and Hao, Hong Xia
- Subjects
- *
INFERENTIAL statistics , *PRICES , *MARKETING models , *BAYES' estimation - Abstract
This article introduces a novel approach that unifies two types of models: the continuous‐time jump‐diffusion used to model high‐frequency financial market data, and the discrete‐time GQARCH used to model low‐frequency financial data, by embedding the discrete GQARCH structure with jumps in the instantaneous volatility process. This model is named the GQARCH‐Itô‐Jumps model. Quasi‐likelihood functions for the low‐frequency GQARCH structure are developed for parameter estimation. In the quasi‐likelihood functions, for high‐frequency financial data, realized range‐based estimators are adopted as the 'observations', rather than realized return‐based volatility estimators, which entail the loss of intra‐day information on price movements. Meanwhile, the asymptotic properties are established for the proposed estimators mainly in the case of finite activity jumps. Moreover, simulation studies and financial data applications are implemented to check the finite sample performance of the proposed methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Some methods to approximate and estimate the reliability function of inverse Rayleigh distribution.
- Author
-
Abraheem, Sudad K., Fezaa Al-Obedy, Nadia J., and Mohammed, Amal A.
- Subjects
RAYLEIGH model ,BERNSTEIN polynomials ,MAXIMUM likelihood statistics ,INVERSE functions ,SIMULATION methods & models ,BAYES' estimation - Abstract
This paper presents new work using an approximate method to find the reliability function of the inverse Rayleigh distribution and compares it with two statistical estimation methods. In the approximate method, the reliability function is expanded using Bernstein polynomials to find its approximate value. As for the statistical estimation methods, the first is maximum likelihood estimation, in which the scale parameter estimator is found in order to estimate the reliability function. The second is the Bayes estimator derived under the NLINEX loss function, where the estimator is determined using a chi-squared informative prior distribution. A simulation technique is used to obtain the results of all methods and compare them using the integrated mean squared error (IMSE) to determine which of these methods is best. Finally, MATLAB 2015 is used to compute the numerical results. [ABSTRACT FROM AUTHOR]
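The Bernstein-polynomial expansion used above approximates a function on [0, 1] by binomially weighted evaluations at the grid points k/n. A minimal sketch; applying it to a reliability function would first require mapping the lifetime support onto [0, 1], a detail not shown here:

```python
import math

def bernstein_approx(f, n, x):
    """Degree-n Bernstein polynomial approximation of f on [0, 1]:
    B_n(f)(x) = sum_k f(k/n) * C(n, k) * x^k * (1 - x)^(n - k)."""
    return sum(f(k / n) * math.comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))
```

The approximation converges uniformly to f as n grows; for instance, B_n of f(u) = u² equals x² + x(1 − x)/n exactly.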
- Published
- 2024
- Full Text
- View/download PDF
32. Bayesian and Non-Bayesian Estimation for The Parameter of Inverted Topp-Leone Distribution Based on Progressive Type I Censoring.
- Author
-
Muhammed, Hiba Z. and Muhammed, Essam A.
- Subjects
BAYES' estimation ,MAXIMUM likelihood statistics ,PARAMETER estimation ,CONFIDENCE intervals ,CENSORSHIP - Abstract
In this paper, Bayesian and non-Bayesian estimations of the shape parameter of the Inverted Topp-Leone distribution are studied under a progressive Type I censoring scheme. The maximum likelihood estimator (MLE) and Bayes estimator (BE) of the unknown parameter under the squared error loss (SEL) function are obtained. Three types of confidence intervals are discussed for the unknown parameter. A simulation study is performed to compare the performances of the proposed methods, and two numerical examples have been analyzed for illustrative purposes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Estimating the entropy of a Rayleigh model under progressive first-failure censoring.
- Author
-
Kotb, Mohammed S. and Alomari, Huda M.
- Subjects
RAYLEIGH model ,MONTE Carlo method ,ENTROPY ,INFERENTIAL statistics ,CENSORSHIP ,BAYES' estimation ,CONFIDENCE intervals - Abstract
Based on a progressive first-failure censoring (PFFC) sample, we discuss statistical inference for the entropy of a Rayleigh distribution. In particular, the maximum likelihood and different Bayes estimates of the entropy are derived and compared via a Monte Carlo simulation study. Bayes estimators are developed using both symmetric and asymmetric loss functions. Approximate confidence intervals (CIs) and credible intervals (CrIs) for the entropy of the model are also obtained. Numerical examples and a real data set are given to illustrate the proposed estimators. [ABSTRACT FROM AUTHOR]
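For reference, the quantity being estimated above has a closed form: the differential entropy of a Rayleigh distribution with scale σ is H = 1 + ln(σ/√2) + γ/2, with γ the Euler-Mascheroni constant. A sketch of this plug-in target:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def rayleigh_entropy(sigma):
    """Differential entropy of Rayleigh(sigma):
    H = 1 + ln(sigma / sqrt(2)) + gamma / 2."""
    return 1.0 + math.log(sigma / math.sqrt(2.0)) + EULER_GAMMA / 2.0
```

An ML estimate of the entropy follows by plugging the ML estimate of σ into this formula, which is the invariance property the abstract's estimators exploit.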
- Published
- 2024
- Full Text
- View/download PDF
34. Inference for the Parameters of a Zero-Inflated Poisson Predictive Model.
- Author
-
Deng, Min, Aminzadeh, Mostafa S., and So, Banghee
- Subjects
BETA distribution ,BAYES' estimation ,INSURANCE companies ,GAMMA distributions ,MAXIMUM likelihood statistics - Abstract
In the insurance sector, Zero-Inflated models are commonly used due to the unique nature of insurance data, which often contain both genuine zeros (meaning no claims made) and potential claims. Although active developments in modeling excess zero data have occurred, the use of Bayesian techniques for parameter estimation in Zero-Inflated Poisson models has not been widely explored. This research aims to introduce a new Bayesian approach for estimating the parameters of the Zero-Inflated Poisson model. The method involves employing Gamma and Beta prior distributions to derive closed formulas for Bayes estimators and predictive density. Additionally, we propose a data-driven approach for selecting hyper-parameter values that produce highly accurate Bayes estimates. Simulation studies confirm that, for small and moderate sample sizes, the Bayesian method outperforms the maximum likelihood (ML) method in terms of accuracy. To illustrate the ML and Bayesian methods proposed in the article, a real dataset is analyzed. [ABSTRACT FROM AUTHOR]
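The Zero-Inflated Poisson model described above mixes a point mass at zero (the "genuine zeros") with a Poisson claim-count component. A minimal sketch of its pmf; the paper's closed-form Bayes estimators under Gamma and Beta priors are not reproduced here:

```python
import math

def zip_pmf(k, pi, lam):
    """Zero-Inflated Poisson pmf: with probability pi the count is a
    structural zero; otherwise it is drawn from Poisson(lam)."""
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    return pi + (1.0 - pi) * poisson if k == 0 else (1.0 - pi) * poisson
```

Conjugacy is natural here: a Beta prior matches the zero-inflation probability pi, and a Gamma prior matches the Poisson rate lam.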
- Published
- 2024
- Full Text
- View/download PDF
35. The generalized order statistics arising from three populations with the lower truncated proportional hazard rate models and its application to the sensitivity to the early disease stage.
- Author
-
Nadeb, Hossein, Torabi, Hamzeh, and Zhao, Yichuan
- Subjects
- *
ORDER statistics , *PROPORTIONAL hazards models , *MONTE Carlo method , *DISEASE progression , *STATISTICAL sampling , *BAYES' estimation - Abstract
In this paper, we present some results for making inference about the parameters of lower truncated proportional hazard rate models with the same baseline distribution, based on three independent generalized order statistics samples. Then, by considering the results of diagnostic tests for the non-diseased, early-diseased, and fully diseased populations, we make inference about the sensitivity to the early disease stage parameter. The maximum likelihood estimator, a generalized pivotal estimator, and some Bayes estimators are obtained for different structures of prior distributions. The percentile bootstrap confidence interval, a generalized pivotal confidence interval, and some Bayesian credible intervals are also presented. A Monte Carlo simulation study is used to evaluate the performance of the obtained point estimators and confidence/credible intervals, as well as two competing methods. We use two real datasets to illustrate the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Maximal entropy prior for the simple step‐stress accelerated test.
- Author
-
Moala, Fernando Antonio and Chagas, Karlla Delalibera
- Subjects
- *
BAYES' estimation , *ACCELERATED life testing , *MONTE Carlo method , *MARKOV chain Monte Carlo , *MARGINAL distributions , *ENTROPY - Abstract
The step‐stress procedure is a popular accelerated test used to analyze the lifetime of highly reliable components. This paper considers a simple step‐stress accelerated test assuming a cumulative exposure model with uncensored lifetime data following a Weibull distribution. The maximum likelihood approach is often used to analyze accelerated stress test data. Another approach is Bayesian inference, which is useful when limited data are available. In this paper, the parameters of the model are estimated from the objective Bayesian viewpoint using non‐informative priors. Our main aim is to propose the maximal data information prior (MDIP) presented by Zellner (1984) as an alternative to the conventional independent gamma priors for the unknown parameters, in situations where there is little or no a priori knowledge about them. We also obtain the Bayes estimators based on both classes of priors, assuming three different loss functions: the squared error loss function (SELF), the linear‐exponential loss function (LINEX), and the generalized entropy loss function (GELF). The proposed MDIP prior is compared with the gamma priors via Monte Carlo simulations by examining the biases, mean square errors under the three loss functions, and coverage probabilities. Additionally, we employ the Markov Chain Monte Carlo (MCMC) algorithm to extract characteristics of the marginal posterior distributions, such as the Bayes estimator and credible intervals. Finally, real lifetime data are presented to illustrate the proposed methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Multiscale Bayes Adaptive Threshold Wavelet Transform Geomagnetic Basemap Denoising Taking Residual Constraints into Account.
- Author
-
Xiong, Pan, Bian, Gang, Liu, Qiang, Jin, Shaohua, and Yin, Xiaodong
- Subjects
- *
WAVELET transforms , *STATISTICS , *STANDARD deviations , *MAGNETIC anomalies , *BAYES' estimation , *ROOT-mean-squares - Abstract
To achieve high-precision geomagnetic matching navigation, a reliable geomagnetic anomaly basemap is essential. However, the accuracy of the geomagnetic anomaly basemap is often compromised by noise data that are inherent in the process of data acquisition and integration of multiple data sources. In order to address this challenge, a denoising approach utilizing an improved multiscale wavelet transform is proposed. The denoising process involves the iterative multiscale wavelet transform, which leverages the structural characteristics of the geomagnetic anomaly basemap to extract statistical information on model residuals. This information serves as the a priori knowledge for determining the Bayes estimation threshold necessary for obtaining an optimal wavelet threshold. Additionally, the entropy method is employed to integrate three commonly used evaluation indexes—the signal-to-noise ratio, root mean square (RMS), and smoothing degree. A fusion model of soft and hard threshold functions is devised to mitigate the inherent drawbacks of a single threshold function. During denoising, the Elastic Net regular term is introduced to enhance the accuracy and stability of the denoising results. To validate the proposed method, denoising experiments are conducted using simulation data from a sphere magnetic anomaly model and measured data from a Pacific Ocean sea area. The denoising performance of the proposed method is compared with Gaussian filter, mean filter, and soft and hard threshold wavelet transform algorithms. The experimental results, both for the simulated and measured data, demonstrate that the proposed method excels in denoising effectiveness; maintaining high accuracy; preserving image details while effectively removing noise; and optimizing the signal-to-noise ratio, structural similarity, root mean square error, and smoothing degree of the denoised image. [ABSTRACT FROM AUTHOR]
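For context, the standard soft and hard wavelet threshold functions that the paper fuses look as follows. A hedged sketch in which the convex mixing weight `alpha` is a hypothetical stand-in; the paper's actual fusion model may combine the two differently:

```python
import numpy as np

def hard_threshold(w, t):
    # Keep coefficients whose magnitude exceeds t, zero the rest.
    return np.where(np.abs(w) > t, w, 0.0)

def soft_threshold(w, t):
    # Shrink coefficient magnitudes toward zero by t.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def fused_threshold(w, t, alpha=0.5):
    # Hypothetical convex combination of soft and hard thresholding,
    # trading soft's bias against hard's discontinuity at the threshold.
    return alpha * soft_threshold(w, t) + (1.0 - alpha) * hard_threshold(w, t)
```

Hard thresholding preserves large coefficients exactly but is discontinuous; soft thresholding is continuous but biases large coefficients, which motivates fusing the two.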
- Published
- 2024
- Full Text
- View/download PDF
38. Strong asymptotic properties of kernel smoothing estimation for NA random variables with right censoring.
- Author
-
Shi, Jian-hua, Xu, Jian-sen, and Xu, Jin-feng
- Subjects
- *
RANDOM variables , *KAPLAN-Meier estimator , *CENSORSHIP , *HAZARD function (Statistics) , *BAYES' estimation - Abstract
Most studies for negatively associated (NA) random variables consider the complete-data situation, which is actually a relatively ideal condition in practice. The article relaxes this condition to the incomplete-data setting and considers kernel smoothing density and hazard function estimation in the presence of right censoring based on the Kaplan–Meier estimator. We establish the strong asymptotic properties for these two estimators to assess their asymptotic behavior and justify their practical use. [ABSTRACT FROM AUTHOR]
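The Kaplan-Meier estimator underlying the kernel-smoothed estimates above can be sketched as a running product over the risk set; a minimal version in which tied times are processed one observation at a time (which yields the same product):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates under right censoring.

    events[i] is 1 for an observed failure and 0 for a right-censored
    observation; returns (time, S(time)) pairs at failure times."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_at_risk = len(times)
    s = 1.0
    curve = []
    for i in order:
        if events[i]:
            s *= (n_at_risk - 1) / n_at_risk  # survive this failure time
            curve.append((times[i], s))
        n_at_risk -= 1  # censored units leave the risk set silently
    return curve
```

Censored observations reduce the risk set without forcing a step in the survival curve, which is exactly how the estimator uses incomplete data.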
- Published
- 2024
- Full Text
- View/download PDF
39. E-Bayesian inference for xgamma distribution under progressive type II censoring with binomial removals and their applications.
- Author
-
Pathak, Anurag, Kumar, Manoj, Singh, Sanjay Kumar, Singh, Umesh, Tiwari, Manoj Kumar, and Kumar, Sandeep
- Subjects
- *
BAYES' estimation , *CHOLANGIOCARCINOMA , *MAXIMUM likelihood statistics , *CENSORING (Statistics) , *BINOMIAL distribution , *ERROR functions , *BALL bearings , *BINOMIAL theorem - Abstract
In this article, we propose E-Bayes estimators of the parameter of the xgamma distribution under the squared error, general entropy, and linear exponential loss functions for progressive Type II censored data with binomial removals. The proposed estimators, the maximum likelihood estimator, and the corresponding Bayes estimators are compared in terms of their risks based on simulated samples from the xgamma distribution. The proposed methodology is illustrated on two real data sets: bile duct cancer data and deep-groove ball bearing endurance data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. An updated software reliability model using the shanker model and failure data.
- Author
-
Shafiq, Anum, Sindhu, Tabassum Naz, Lone, Showkat Ahmad, Abushal, Tahani A., and Hassan, Marwa K. H.
- Subjects
- *
SOFTWARE reliability , *COMPUTER software developers , *POISSON processes , *DATA modeling , *BAYES' estimation - Abstract
Software developers' goal is to develop reliable and superior software. Software reliability is essential because software errors frequently generate large societal or financial losses. Software reliability growth models are a widely used technique for software reliability assessment. This study examines various nonhomogeneous Poisson process models with the newly developed software reliability distribution and evaluates the unknown model parameters based on frequentist and Bayesian methods of estimation. Finally, we conduct evaluations on real datasets using a variety of evaluation criteria to compare the results with previous software reliability growth models and show how the proposed model may be applied under both approaches in a practical setting. According to this study, the new model's mean square error, R², adjusted R², bias, predicted relative variation, Theil statistic, and mean error of prediction are lowest under the Bayesian approach for data sets II to IV, and both approaches perform well for data set I. These findings demonstrate the effectiveness of our approach based on our examination of failure data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. The empirical Bayes estimators of the parameter of the uniform distribution with an inverse gamma prior under Stein's loss function.
- Author
-
Sun, Ya, Zhang, Ying-Ying, and Sun, Ji
- Subjects
- *
BAYES' estimation , *GAMMA distributions , *MAXIMUM likelihood statistics , *ERROR functions , *SPECIAL functions , *GAMMA functions - Abstract
For the hierarchical uniform and inverse gamma model, we calculate the Bayes posterior estimator of the parameter of the uniform distribution under Stein's loss function, which penalizes gross overestimation and gross underestimation equally, and the corresponding Posterior Expected Stein's Loss (PESL). We also obtain the Bayes posterior estimator of the parameter under the squared error loss function and the corresponding PESL. Moreover, we obtain empirical Bayes estimators of the parameter of the uniform distribution by two methods. Notably, the estimators of the hyperparameters of the model obtained by the Maximum Likelihood Estimation (MLE) method are summarized in a theorem, whose proof involves the upper incomplete gamma function and a special case of the Meijer G-function. The numerical simulations address four points. First, we exemplify the two inequalities of the Bayes posterior estimators and the PESLs. Second, we illustrate that the moment estimators and the Maximum Likelihood Estimators (MLEs) are consistent estimators of the hyperparameters. Third, we calculate the goodness-of-fit of the model for the simulated data. Fourth, we plot the marginal densities of the model for various hyperparameters. Finally, we utilize the current prices of the 300 component stocks of the Shenzhen 300 Index to illustrate our theoretical studies. [ABSTRACT FROM AUTHOR]
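Stein's loss mentioned above is L(d, θ) = d/θ − log(d/θ) − 1, and minimizing its posterior expectation over d gives the reciprocal of the posterior mean of 1/θ. A sketch using posterior draws in place of the exact posterior expectation:

```python
import numpy as np

def stein_loss(d, theta):
    # Zero when d = theta; penalizes over- and under-estimation
    # symmetrically on a multiplicative scale.
    r = d / theta
    return r - np.log(r) - 1.0

def bayes_under_stein(posterior_draws):
    # Bayes estimator under Stein's loss: 1 / E[1/theta | data],
    # approximated here by a posterior sample average.
    return 1.0 / np.mean(1.0 / np.asarray(posterior_draws))
```

The derivation is one line: d/dd E[d/θ − log d + log θ − 1] = E[1/θ] − 1/d = 0, so d = 1/E[1/θ].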
- Published
- 2024
- Full Text
- View/download PDF
42. Approximate Bayesian estimator for the random-coefficients model.
- Author
-
Wang, Jie Jiang Lichun
- Subjects
- *
BAYES' estimation , *MAXIMUM likelihood statistics , *ANALYSIS of variance , *MARKOV chain Monte Carlo - Abstract
This article constructs an approximate Bayesian estimator for the parameter vector consisting of the variance components in a random-coefficients regression (RCR) model with unbalanced data. Its superiority over the analysis of variance estimator (ANOVAE) is strictly proved in terms of the mean squared error matrix (MSEM) criterion. Compared with the usual Bayes estimator computed via the MCMC method, the proposed approximate Bayesian estimator is simple and easy to interpret and use. We also compare it with the maximum likelihood estimator (MLE) and the restricted maximum likelihood estimator (RMLE) of the variance components. Numerical computations show that the approximate Bayesian estimator has good approximation performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. The evaluation of the p-value as an estimator for the null hypothesis in the exponential distribution.
- Author
-
Babadi, Masoumeh, Hormozinejad, Farshin, and Zaherzadeh, Ali
- Subjects
- *
DISTRIBUTION (Probability theory) , *NULL hypothesis , *BAYES' estimation , *CONFORMANCE testing , *DECISION theory - Abstract
This paper is concerned with investigating the adequacy of using the p-value as an estimator for the set specified by the null hypothesis in the Exponential distribution. It is shown that the p-value is an admissible estimator in the one-sided test of the location parameter. When the one-sided test of the scale parameter is considered, the p-value is found to be a generalized Bayes estimator with infinite Bayes risk. However, it is very difficult to find an estimator that dominates it. When the parameter space is restricted, the modified p-value is an admissible estimator in the one-sided test of the scale parameter and performs better than the usual p-value. Although the usual p-value is generally inadmissible in the two-sided test, it can be useful as an estimator in this type of test for the scale parameter. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Reliability inference based on inverted exponentiated Rayleigh lifetime distribution under unified hybrid censored scheme.
- Author
-
Tashkandy, Yusra A., Hasaballah, Mustafa M., Bakr, M. E., Balogun, Oluwafemi Samson, and Ramadan, Dina A.
- Subjects
- *
BAYES' estimation , *RAYLEIGH model , *MARKOV chain Monte Carlo , *CONTINUOUS distributions , *MAXIMUM likelihood statistics , *CENSORING (Statistics) - Abstract
In this study, we investigated the Inverted Exponentiated Rayleigh Distribution (IERD), a significant and efficient continuous lifetime distribution commonly applied in lifespan research. Our focus was on estimating unknown parameters for a two-parameter inverted exponentiated Rayleigh distribution using unified hybrid censored data. We considered both maximum likelihood and Bayesian estimation approaches. Specifically, we employed the Gibbs within Metropolis–Hastings samplers method to develop approximate Bayes estimators utilizing informative and non-informative priors, along with symmetric and asymmetric loss functions. In addition, we utilized Markov chain Monte Carlo (MCMC) samples to derive maximum posterior density credible intervals. Simulation experiments were conducted to assess the efficacy of the proposed methodologies, and actual data analysis was performed to validate the proposed estimators. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. BAYES ESTIMATOR OF PARAMETERS OF BINOMIAL TYPE EXPONENTIAL CLASS SRGM USING GAMMA PRIORS.
- Author
-
Singh, Rajesh, Kale, Kailash R., and Singh, Pritee
- Subjects
- *
BAYES' estimation , *BINOMIAL theorem , *MAXIMUM likelihood statistics - Abstract
Reliability is a key characteristic of software that operates flawlessly and in accordance with users' needs. Assessing reliability is important but complicated. In this article, the one-parameter exponential-class failure intensity function is used to specify the model and assess software reliability. The model's parameters are the scale parameter and the total number of existing failures. Using the Bayesian approach, estimators of the parameters are obtained under the assumption that gamma priors suitably capture the available prior information. The performance of the proposed estimators is compared with that of the corresponding maximum likelihood estimators using risk efficiencies computed under squared error loss. The suggested Bayes estimators are found to outperform the corresponding maximum likelihood estimators. [ABSTRACT FROM AUTHOR]
- Published
- 2024
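Entry 45 compares Bayes and maximum likelihood estimators by risk efficiency under squared error loss. The sketch below reproduces that style of comparison on a toy problem (a Poisson failure-count rate with a conjugate gamma prior, not the authors' SRGM); `simulate_risk` and its settings are illustrative:

```python
import math
import random

def simulate_risk(theta, n=5, a=2.0, b=1.0, reps=4000, seed=7):
    """Monte-Carlo MSE of the MLE (sample mean) versus the Bayes estimator
    (posterior mean under a Gamma(a, rate b) prior, squared-error loss)
    for a Poisson failure-count rate theta."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplication method; adequate for small lam.
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    mse_mle = mse_bayes = 0.0
    for _ in range(reps):
        s = sum(poisson(theta) for _ in range(n))
        mse_mle += (s / n - theta) ** 2
        mse_bayes += ((a + s) / (b + n) - theta) ** 2  # conjugate posterior mean
    return mse_mle / reps, mse_bayes / reps

mse_mle, mse_bayes = simulate_risk(theta=1.5)
risk_efficiency = mse_mle / mse_bayes   # > 1: the Bayes rule wins here
```

The Bayes rule wins in this sketch because the true rate sits near the prior mean a/b; with a badly misspecified prior the ranking can reverse, which is why risk efficiency is reported over a range of parameter values.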
46. ON ESTIMATION AND PREDICTION FOR THE XLINDLEY DISTRIBUTION BASED ON RECORD DATA.
- Author
-
ZANJIRAN, F. and MIRMOSTAFAEE, S. M. T. K.
- Subjects
- *
BAYES' estimation , *PROBABILITY density function , *EXPONENTIAL functions - Abstract
This paper investigates the estimation of the unknown parameter in the XLindley distribution using record values and inter-record times, both in classical and Bayesian frameworks. It also delves into Bayesian prediction of a future record value. We also study the problem of estimation and prediction for the XLindley distribution based on lower records alone. A simulation study, as well as an analysis of a real data example, are conducted for comparison and illustration. The numerical findings underline that including the inter-record times in the study may enhance the performance of the estimators and predictors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
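A minimal sketch of the record-value machinery used in entry 46: extracting lower records and inter-record times from an observation sequence (the function name is illustrative, and the XLindley-specific estimation and prediction steps are not reproduced):

```python
import random

def lower_records(xs):
    """Return (records, gaps): each observation strictly smaller than all
    previous ones is a lower record; gaps[i] counts how many observations
    were seen since the previous record (the inter-record time)."""
    records, gaps = [], []
    since = 0
    for x in xs:
        since += 1
        if not records or x < records[-1]:
            records.append(x)
            gaps.append(since)
            since = 0
    return records, gaps

random.seed(3)
sample = [round(random.random(), 3) for _ in range(30)]
recs, gaps = lower_records(sample)   # recs is strictly decreasing
```

The abstract's point is that the likelihood built from `(recs, gaps)` jointly carries more information than one built from `recs` alone, which is what improves the estimators and predictors.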
47. A NEW BAYESIAN CONTROL CHART FOR PROCESS MEAN USING EMPIRICAL BAYES ESTIMATES.
- Author
-
DAS, SOURADEEP and MAITI, SUDHANSU S.
- Subjects
- *
BAYES' estimation , *PROCESS control charts , *PERFORMANCE evaluation - Abstract
This article develops a new control chart for the mean using empirical Bayes estimates. We assume that the quality characteristic of the proposed control chart follows a normal distribution with unknown mean and variance. Both the parameters have known prior probability distributions. In practice, the parameters of priors are unknown and are estimated using the empirical Bayes approach. For the performance assessment of the new control chart, the Average Run Length (ARL) procedure is used while the process is in control and out of control. A real-life example is also considered to evaluate the performance of the proposed control chart. [ABSTRACT FROM AUTHOR]
- Published
- 2024
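Entry 47 evaluates its chart by Average Run Length in and out of control. The sketch below estimates ARL by Monte Carlo for a plain Shewhart-style mean chart with known parameters; it omits the empirical Bayes estimation step entirely, and all names and settings are illustrative:

```python
import random

def average_run_length(mu, sigma, width=3.0, shift=0.0, reps=1000, seed=11):
    """Monte-Carlo ARL of a Shewhart-style mean chart with known in-control
    parameters: count plotted points until one falls outside
    mu +/- width*sigma.  shift moves the true mean to mimic an
    out-of-control process."""
    rng = random.Random(seed)
    lo, hi = mu - width * sigma, mu + width * sigma
    total = 0
    for _ in range(reps):
        run = 0
        while True:
            run += 1
            if not (lo <= rng.gauss(mu + shift, sigma) <= hi):
                break
        total += run
    return total / reps

arl_in  = average_run_length(mu=0.0, sigma=1.0)             # theory: about 370
arl_out = average_run_length(mu=0.0, sigma=1.0, shift=1.5)  # much shorter
```

A long in-control ARL with a short out-of-control ARL is the desirable pattern; in the paper the limits themselves come from empirical Bayes estimates rather than the known `mu` and `sigma` assumed here.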
48. Bayesian instrumental variable estimation in linear measurement error models.
- Author
-
Wang, Qi, Wang, Lichun, and Wang, Liqun
- Subjects
- *
ERRORS-in-variables models , *MEASUREMENT errors , *INSTRUMENTAL variables (Statistics) , *BAYES' estimation , *LENGTH measurement , *PARAMETER estimation - Abstract
In this article, we study the problem of parameter estimation for measurement error models by combining the Bayes method with the instrumental variable approach, deriving the posterior distribution of parameters under different priors with known and unknown variance parameters, respectively, and calculating the Bayes estimator (BE) of the parameters under quadratic loss. However, it is difficult to obtain an explicit expression for BE because of the complex multiple integrals involved. Therefore, we adopt the linear Bayes method, which does not specify the form of the prior and avoids these complicated integral calculations, to obtain an expression for the linear Bayes estimator (LBE) for different priors. We prove that this LBE is superior to the two‐stage least squares estimator under the mean squared error matrix criterion. Numerical simulations show that our LBE is very close to the real parameter whether the variance parameters are known or unknown, and it gradually approaches BE as the sample size increases. Our results indicate that this instrumental variable approach is valid for measurement error models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
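Entry 48 benchmarks its linear Bayes estimator against two-stage least squares in a measurement error model. The sketch below shows the baseline phenomenon: naive least squares is attenuated by measurement error, while a simple instrumental-variable estimator recovers the slope (the LBE itself is not reproduced, and the data-generating settings are illustrative):

```python
import random

def cov(a, b):
    """Sample covariance (population normalisation is fine for ratios)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

random.seed(5)
n, beta = 20000, 2.0
z      = [random.gauss(0, 1) for _ in range(n)]        # instrument
x_true = [zi + random.gauss(0, 1) for zi in z]         # latent regressor
x_obs  = [xt + random.gauss(0, 1) for xt in x_true]    # adds measurement error
y      = [beta * xt + random.gauss(0, 0.5) for xt in x_true]

beta_ols = cov(x_obs, y) / cov(x_obs, x_obs)  # attenuated toward zero
beta_iv  = cov(z, y) / cov(z, x_obs)          # consistent for beta
```

In this scalar case the IV ratio coincides with two-stage least squares; the paper's LBE is shown to dominate that baseline under the mean squared error matrix criterion.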
49. Sparse Convoluted Rank Regression in High Dimensions.
- Author
-
Zhou, Le, Wang, Boxiang, and Zou, Hui
- Subjects
- *
LEAST squares , *BAYES' estimation , *AKAIKE information criterion , *HYPERCUBES - Abstract
Wang et al. studied the high-dimensional sparse penalized rank regression and established its nice theoretical properties. Compared with the least squares, rank regression can have a substantial gain in estimation efficiency while maintaining a minimal relative efficiency of 86.4%. However, the computation of penalized rank regression can be very challenging for high-dimensional data, due to the highly nonsmooth rank regression loss. In this work we view the rank regression loss as a nonsmooth empirical counterpart of a population level quantity, and a smooth empirical counterpart is derived by substituting a kernel density estimator for the true distribution in the expectation calculation. This view leads to the convoluted rank regression loss and consequently the sparse penalized convoluted rank regression (CRR) for high-dimensional data. We prove some interesting asymptotic properties of CRR. Under the same key assumptions for sparse rank regression, we establish the rate of convergence of the l1-penalized CRR with a tuning-free penalization parameter and prove the strong oracle property of the folded concave penalized CRR. We further propose a high-dimensional Bayesian information criterion for selecting the penalization parameter in folded concave penalized CRR and prove its selection consistency. We derive an efficient algorithm for solving sparse convoluted rank regression that scales well with high dimensions. Numerical examples demonstrate the promising performance of the sparse convoluted rank regression over the sparse rank regression. Our theoretical and numerical results suggest that sparse convoluted rank regression enjoys the best of both sparse least squares regression and sparse rank regression. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
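The convolution idea in entry 49 can be made concrete in one dimension: replacing |t| in the pairwise rank loss with its expectation under a Gaussian kernel yields a smooth loss that tends to the original as the bandwidth shrinks. A minimal sketch (the kernel choice and residuals are illustrative):

```python
import math

def smoothed_abs(t, h):
    """|t| convolved with an N(0, h^2) kernel, i.e. E|t + h*Z| for
    Z ~ N(0,1): smooth in t, and -> |t| as h -> 0."""
    z = t / h
    return (h * math.sqrt(2 / math.pi) * math.exp(-z * z / 2)
            + t * math.erf(z / math.sqrt(2)))

def rank_loss(residuals, h=None):
    """Pairwise (Wilcoxon-type) rank regression loss; passing a bandwidth
    h swaps in the smooth convoluted version of the absolute value."""
    n = len(residuals)
    f = abs if h is None else (lambda t: smoothed_abs(t, h))
    return sum(f(ri - rj) for ri in residuals for rj in residuals) / (n * n)

res = [0.3, -1.2, 0.9, 0.1, -0.4]
rough  = rank_loss(res)           # nonsmooth rank loss
smooth = rank_loss(res, h=0.05)   # differentiable, and close to rough
```

Smoothness is what makes gradient-based solvers and the paper's theory tractable; by Jensen's inequality the smoothed loss upper-bounds the original, with the gap vanishing as h shrinks.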
50. A data-driven approach for regional-scale fine-resolution disaster impact prediction under tropical cyclones.
- Author
-
Lin, Peihui and Wang, Naiyu
- Subjects
TROPICAL cyclones ,CONVOLUTIONAL neural networks ,BAYES' estimation ,IMAGE encryption ,PREDICTION models ,FORECASTING ,DISASTERS - Abstract
Tropical cyclones (TCs) pose a significant threat to coastal regions worldwide, demanding accurate and timely predictions of potential disaster impacts. Existing regional-scale impact prediction models, however, are largely limited by the sparsity of modeling data and incapability of fine-resolution predictions in a computationally efficient manner, thus hindering real-time identification of potential disaster hotspots. To address these limitations, we present a data-driven image-to-image TC impact prediction model based on a deep convolutional neural network (CNN) for Zhejiang Province, China, an area of approximately 105,000 km² consisting of 90 counties. The proposed model utilizes twelve carefully selected predictors, including hazard, environmental and vulnerability factors, which are processed into province-scale 1 km-grid image-format data. An end-to-end encoder-decoder architecture is subsequently designed to extract impact-relevant spatial features from the multi-channel input images, then to construct a spatial impact map of identical size (i.e., 105,000 km²) and resolution (i.e., 1 km-grid). This gridded impact map is then aggregated spatially to derive county-level impact predictions, which serve as the final layer of the CNN model and are used to evaluate the model's loss function in terms of mean squared error. This design is informed by the fact that the training data on TC impact, collected from historical events, were recorded at county level. Validation and error analysis demonstrate the model's promising spatial accuracy and time efficiency. Furthermore, an illustration of the model's application with Typhoon Lekima in 2019 underscores its potential for integrating meteorological forecasts to achieve real-time impact predictions and inform emergency response actions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
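The design step in entry 50, aggregating a 1 km gridded impact map to county totals and scoring against county-level records with mean squared error, can be sketched without the CNN itself (the toy grids and county labels are illustrative):

```python
def county_mse(impact_grid, county_grid, recorded):
    """Sum a gridded impact map into county totals, then score the totals
    against county-level records with mean squared error, mirroring a loss
    defined at the resolution where the training labels actually exist."""
    totals = {}
    for impact_row, county_row in zip(impact_grid, county_grid):
        for value, county in zip(impact_row, county_row):
            totals[county] = totals.get(county, 0.0) + value
    errors = [(totals.get(c, 0.0) - obs) ** 2 for c, obs in recorded.items()]
    return totals, sum(errors) / len(errors)

pred = [[1.0, 2.0],
        [0.5, 1.5]]             # toy 2x2 "1 km-grid" impact map
county = [["A", "A"],
          ["B", "B"]]           # county membership of each grid cell
records = {"A": 3.0, "B": 1.0}  # county-level training labels
totals, mse = county_mse(pred, county, records)
```

In the paper this aggregation is the CNN's final layer, so the gridded map is trained end-to-end even though supervision only exists at county resolution.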