25,754 results for "maximum likelihood estimation"
Search Results
2. Some inferences on a mixture of exponential and Rayleigh distributions based on fuzzy data
- Author
- Mathai, Ashlyn Maria and Kumar, Mahesh
- Published
- 2024
- Full Text
- View/download PDF
3. Parameter estimation procedures for exponential-family random graph models on count-valued networks: A comparative simulation study
- Author
- Huang, Peng and Butts, Carter T
- Subjects
- Anthropology, Sociology, Human Society, Bioengineering, Generic health relevance, Contrastive divergence, Exponential-family random graph model, Markov chain Monte Carlo, Maximum likelihood estimation, Pseudo likelihood, Valued, Weighted networks
- Published
- 2024
4. Wind speed probabilistic forecast based wind turbine selection and siting for urban environment.
- Author
- Sachar, Shivangi, Shubham, Shubham, Doerffer, Piotr, Ianakiev, Anton, and Flaszyński, Paweł
- Abstract
Wind energy, being a free source of energy, has grown in popularity over the past decades and is being studied extensively. Integration of wind turbines is now being expanded to urban and offshore settings, in contrast to the conventional wind farms in relatively open areas. The direct installation of wind turbines poses a potential risk, as it may result in financial losses in scenarios characterized by inadequate wind resource availability. Therefore, wind energy availability analysis in such urban environments is a necessity. This research paper presents an in‐depth investigation conducted to predict the exploitable wind energy at four distinct locations within Nottingham, United Kingdom. Subsequently, the most suitable location, Clifton Campus at Nottingham Trent University, is identified, where a comprehensive comparative analysis of power generation from eleven different wind turbine models is performed. The findings derived from this analysis suggest that the QR6 wind turbine emerges as the optimal choice for subsequent experimental investigations to be conducted in partnership with Nottingham Trent University. Furthermore, this study explores the selection of an appropriate probability density function for assessing wind potential, considering seven different distributions, namely Gamma, Weibull, Rayleigh, Log‐normal, Genextreme, Gumbel, and Normal. Ultimately, the Weibull probability distribution is selected, and various methodologies are employed to estimate its parameters, which are then ranked using statistical assessments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
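As a quick, hedged illustration of the Weibull fitting step described in entry 4: the sketch below fits a two-parameter Weibull to wind speeds by maximum likelihood with SciPy. The data array is synthetic stand-in data, not the Nottingham measurements, and the routines shown are one standard way to do this, not the authors' pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wind_speeds = rng.weibull(2.0, size=1000) * 6.0  # synthetic stand-in data (m/s)

# weibull_min is SciPy's Weibull; fixing floc=0 gives the usual
# two-parameter wind-energy form with shape k and scale c (MLE fit).
k, loc, c = stats.weibull_min.fit(wind_speeds, floc=0)
print(f"shape k = {k:.3f}, scale c = {c:.3f} m/s")

# Goodness of fit can then be screened with, e.g., a Kolmogorov-Smirnov test,
# before ranking candidate distributions as the entry describes.
print(stats.kstest(wind_speeds, "weibull_min", args=(k, loc, c)))
```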
5. Properties and applications of two-tailed quasi-Lindley distribution.
- Author
- Kumar, C. Satheesh and Jose, Rosmi
- Subjects
- LAPLACE distribution, MAXIMUM likelihood statistics, RENYI'S entropy, ORDER statistics
- Abstract
Here we consider two-parameter and three-parameter versions of the two-tailed quasi-Lindley distribution and investigate their important properties. An attempt has been made to estimate its parameters by the method of maximum likelihood, along with a brief discussion on the existence of the estimators. Further, the distribution is fitted to certain real-life data sets to illustrate the utility of the proposed models. A simulation study is carried out to assess the performance of likelihood estimators of the parameters of the distribution. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Efficient non-parametric estimation of variable productivity Hawkes processes.
- Author
- Phillips, Sophie and Schoenberg, Frederic
- Subjects
- NONPARAMETRIC estimation, MAXIMUM likelihood statistics, POINT processes, LEAST squares, FIX-point estimation, DATA binning
- Abstract
Several approaches to estimating the productivity function for a Hawkes point process with variable productivity are discussed, improved upon, and compared in terms of their root-mean-squared error and computational efficiency for various data sizes, and for binned as well as unbinned data. We find that for unbinned data, a regularized version of the analytic maximum likelihood estimator proposed by Schoenberg is the most accurate but is computationally burdensome. The unregularized version of the estimator is faster to compute but has lower accuracy, though both estimators outperform empirical or binned least squares estimators in terms of root-mean-squared error, especially when the mean productivity is 0.2 or greater. For binned data, binned least squares estimates are highly efficient both in terms of computation time and root-mean-squared error. An extension to estimating transmission time density is discussed, and an application to estimating the productivity of Covid-19 in the United States as a function of time from January 2020 to July 2022 is provided. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
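For context on entry 6, the sketch below writes out the standard log-likelihood of a Hawkes process with an exponential kernel and a constant productivity K, and maximizes it numerically. This is a toy baseline under simplifying assumptions: the event times are made up, and the paper's variable-productivity and regularized estimators are considerably more elaborate.

```python
import numpy as np
from scipy.optimize import minimize

def hawkes_neg_loglik(params, times, T):
    """Negative log-likelihood of a Hawkes process with intensity
    mu + sum_{t_i < t} K * beta * exp(-beta * (t - t_i)) on [0, T]."""
    mu, K, beta = params
    if mu <= 0 or K <= 0 or K >= 1 or beta <= 0:
        return np.inf
    # Recursion A_j = exp(-beta * dt_j) * (1 + A_{j-1}) avoids O(n^2) sums.
    A, ll = 0.0, 0.0
    for j in range(len(times)):
        if j > 0:
            A = np.exp(-beta * (times[j] - times[j - 1])) * (1.0 + A)
        ll += np.log(mu + K * beta * A)
    ll -= mu * T + K * np.sum(1.0 - np.exp(-beta * (T - times)))
    return -ll

times = np.sort(np.random.default_rng(1).uniform(0, 100, size=200))  # toy data
fit = minimize(hawkes_neg_loglik, x0=[1.0, 0.3, 1.0],
               args=(times, 100.0), method="Nelder-Mead")
print(fit.x)  # estimated (mu, K, beta)
```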
7. A novel statistical approach to COVID-19 variability using the Weibull-Inverse Nadarajah Haghighi distribution.
- Author
- Ahmad, Aijaz, Alsadat, Najwan, Rather, Aafaq A., Meraou, M.A., and Mohie El-Din, Marwa M.
- Abstract
Researchers have devoted decades to creating a plethora of distinctive distributions in order to meet specific objectives. The argument is that traditional distributions have typically been found to lack fit in real-world situations, including pharmaceutical studies, engineering, hydrology, environmental science, and a number of other fields. The Weibull-inverse Nadarajah Haghighi (WINH) distribution is developed by combining the Weibull and inverse Nadarajah Haghighi distributions. The proposed distribution's fundamental characteristics have been established and analyzed. Several plots of the distributional properties, notably the probability density function (PDF) with the corresponding cumulative distribution function (CDF), are displayed. The estimation of the model parameters is performed via the MLE procedure. Simulation-based research is conducted to demonstrate the performance of the proposed estimators using measures such as the average bias, variance, and associated mean square error (MSE). Two real datasets representing the mortality due to COVID-19 in France and Canada are analyzed to show the practicality of the recommended model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Bayesian and non Bayesian inference for extended two parameters model with application in financial and production fields.
- Author
- Alhelali, Marwan H. and Alsaedi, Basim S.O.
- Abstract
In statistical inference, introducing a probability distribution appropriate for modeling complex, skewed, and symmetric datasets plays an important role. This article presents a new method, referred to as the exponential transformed approach, aimed at creating fresh probability models. This method entails transforming independent and identically distributed reduced Kies random variables. This article establishes various statistical and distributional properties of this model. Furthermore, the article employs several estimation methods to estimate the unknown parameters of the proposed model. Simulation experiments are conducted to showcase the effectiveness of the proposed estimators. Additionally, two real-world data analyses demonstrate practical applications in financial and production contexts, and it is shown that the recommended distribution has superior performance compared to other existing models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Exploring statistical and machine learning methods for modeling probability distribution parameters in downtime length analysis: a paper manufacturing machine case study.
- Author
- Koković, Vladimir, Pavlović, Kosta, Mijanović, Andjela, Kovačević, Slavko, Mačužić, Ivan, and Božović, Vladimir
- Subjects
- ARTIFICIAL neural networks, VIBRATION (Mechanics), PROBABILITY density function, DISTRIBUTION (Probability theory), DATA libraries
- Abstract
Manufacturing companies focus on improving productivity, reducing costs, and aligning performance metrics with strategic objectives. In industries like paper manufacturing, minimizing equipment downtime is essential for maintaining high throughput. Leveraging the extensive data generated by these facilities offers opportunities for gaining competitive advantages through data-driven insights, revealing trends and patterns, and predicting future performance indicators like unplanned downtime length, which is essential in optimizing maintenance and minimizing potential losses. This paper explores statistical and machine learning techniques for modeling downtime length probability distributions and their correlation with machine vibration measurements. We propose a novel framework employing advanced data-driven techniques like artificial neural networks (ANNs) to estimate the parameters of probability distributions governing downtime lengths. Our approach specifically focuses on modeling the parameters of these distributions, rather than directly modeling probability density function (PDF) values, as is common in other approaches. Experimental results indicate a significant performance boost, with the proposed method achieving up to 30% superior performance in modeling the distribution of downtime lengths compared to alternative methods. Moreover, this method facilitates unsupervised training, making it suitable for big data repositories of unlabelled data. The framework allows for potential expansion by incorporating additional input variables. In this study, machine vibration velocity measurements are selected for further investigation. The study underscores the potential of advanced data-driven techniques to enable companies to make better-informed decisions regarding their current maintenance practices and to direct improvement programs in industrial settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. High-dimensional Bayesian optimization with a combination of Kriging models.
- Author
- Appriou, Tanguy, Rullière, Didier, and Gaudrie, David
- Abstract
Bayesian Optimization (BO) is a popular approach to solving optimization problems using as few function evaluations as possible. In particular, Efficient Global Optimization (EGO) based on Kriging surrogate models has been successfully applied to many real-world applications in low dimensions (fewer than 30 design parameters). However, in high dimension, building an accurate Kriging model is difficult, especially when the number of samples is limited, as is the case when dealing with numerical simulators. This is due to the inner optimization of the Kriging length-scale hyperparameters, which can lead to inaccurate models and impact the performance of the optimization. In this paper, we introduce a new method for high-dimensional BO which bypasses the length-scale optimization by combining sub-models with random length-scales, and whose expression, obtained in closed form, avoids any inner optimization. We also describe how to sample suitable length-scales for the sub-models using an entropy-based criterion, in order to avoid degenerate sub-models having either too large or too small length-scales. Finally, since the variance of the combination is not directly available, we present a method to compute the prediction variance for any weighting method. We apply our combined Kriging model to high-dimensional BO for analytical test functions and for the design of an electric machine. We show that our method builds more accurate surrogate models than ordinary Kriging when the number of samples is small. This results in faster convergence for BO using the combination. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Uniformization and bounded Taylor series in Newton–Raphson method improves computational performance for a multistate transition model estimation and inference.
- Author
- Zhu, Yuxi, Brock, Guy, and Li, Lang
- Abstract
Multistate transition models (MSTMs) are valuable tools depicting disease progression. However, due to the complexity of MSTMs and the larger sample sizes and longer follow-up times in real-world data, the computation of statistical estimation and inference for MSTMs becomes challenging. A bounded Taylor series in the Newton–Raphson procedure is proposed, which leverages the uniformization technique to derive maximum likelihood estimates and the corresponding covariance matrix. The proposed method, namely uniformization Taylor-bounded Newton–Raphson, is validated in three simulation studies, which demonstrate its accuracy in parameter estimation, its efficiency in computation time, and its robustness across different situations. This method is also illustrated using large electronic medical record data related to statin-induced side effects and discontinuation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
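Entry 11 refines the Newton-Raphson step that underlies most MLE computations. As a generic reminder of that step (without the uniformization or bounded-Taylor refinements, which are specific to the paper), the sketch below applies Newton-Raphson to the Weibull profile-likelihood score equation for the shape parameter.

```python
import numpy as np

def weibull_shape_mle(x, k=1.0, tol=1e-10, max_iter=100):
    """Newton-Raphson on the profile score g(k) = 1/k + mean(log x) - S1/S0."""
    logx = np.log(x)
    for _ in range(max_iter):
        xk = x ** k
        r1 = np.sum(xk * logx) / np.sum(xk)
        g = 1.0 / k + logx.mean() - r1            # score
        r2 = np.sum(xk * logx ** 2) / np.sum(xk)
        gp = -1.0 / k ** 2 - (r2 - r1 ** 2)       # score derivative
        step = g / gp
        k -= step                                 # Newton-Raphson update
        if abs(step) < tol:
            break
    scale = (np.sum(x ** k) / len(x)) ** (1.0 / k)
    return k, scale

x = np.random.default_rng(2).weibull(1.5, 500) * 2.0  # synthetic lifetimes
print(weibull_shape_mle(x))  # roughly (1.5, 2.0)
```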
12. Inference methods for the Very Flexible Weibull distribution based on progressive type-II censoring.
- Author
- Brito, Eder S., Ferreira, Paulo H., Tomazella, Vera L. D., Martins Neto, Daniele S. B., and Ehlers, Ricardo S.
- Subjects
- CENSORING (Statistics), MAXIMUM likelihood statistics, BAYES' estimation, WEIBULL distribution, DATA modeling, MARKOV chain Monte Carlo
- Abstract
In this work, we present classical and Bayesian inferential methods based on samples in the presence of progressive type-II censoring under the Very Flexible Weibull (VFW) distribution. The considered distribution is relevant because it is an alternative to traditional non-flexible distributions, and also to some flexible distributions already known in the literature, while retaining only two parameters. In addition, studying it in a context of progressive censoring attests to its applicability in modeling data from various areas of industry and technology that use this censoring methodology. We obtain the maximum likelihood estimators of the model parameters, as well as their asymptotic variation measures. We propose the use of Markov chain Monte Carlo methods for the computation of Bayes estimates. A simulation study is carried out to evaluate the performance of the proposed estimators under different sample sizes and progressive type-II censoring schemes. Finally, the methodology is illustrated through three real data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Classical inference for time series of count data in parameter-driven models.
- Author
- Marciano, Francisco William P.
- Subjects
- MAXIMUM likelihood statistics, TIME series analysis, CONFIDENCE intervals, MARKOV chain Monte Carlo, DATA modeling
- Abstract
We study estimation in parameter-driven models for time series of counts. This class of models follows the structure of a generalized linear model in which the serial dependency is included in the model by the link function through a time-dependent latent process. The likelihood function for this class of models commonly cannot be calculated explicitly, and computationally intensive methods like importance sampling and Markov chain Monte Carlo are used to estimate the model parameters. Here, we propose a simple and fast estimation procedure for a wide class of models that accommodate both discrete and continuous data. The maximum likelihood methodology is used to obtain the parameter estimates for the models under study. The simplicity of the procedure allows for building bootstrap confidence intervals for the hyperparameters and latent states of parameter-driven models. We perform extensive simulation studies to verify the asymptotic behavior of the parameter estimates, as well as present an application of the proposed procedure on a set of real data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Reliability analysis of multiple repairable systems under imperfect repair and unobserved heterogeneity.
- Author
- Brito, Éder S., Tomazella, Vera L. D., Ferreira, Paulo H., Louzada Neto, Francisco, and Gonzatto Junior, Oilson A.
- Subjects
- ASYMPTOTIC efficiencies, MAXIMUM likelihood statistics, RELIABILITY in engineering, FAILURE (Psychology), HETEROGENEITY
- Abstract
Imperfect repairs (IRs) are widely applicable in reliability engineering since most equipment is not completely replaced after failure. In this sense, it is necessary to develop methodologies that can describe failure processes and predict the reliability of systems under this type of repair. One of the challenges in this context is to establish reliability models for multiple repairable systems considering unobserved heterogeneity associated with systems' failure times and their failure intensity after performing IRs. Thus, in this work, frailty models are proposed to identify unobserved heterogeneity in these failure processes. In this context, we consider the arithmetic reduction of age (ARA) and arithmetic reduction of intensity (ARI) classes of IR models, with constant repair efficiency, a power‐law process distribution to model failure times, and a univariate Gamma-distributed frailty shared by all systems' failure times. Classical inferential methods are used to estimate the parameters and reliability predictors of systems under IRs. An extensive simulation study is carried out under different scenarios to investigate the suitability of the models and the asymptotic consistency and efficiency properties of the maximum likelihood estimators. Finally, we illustrate the practical relevance of the proposed models on two real data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Reliability analysis of deep space satellites launched 1991–2020: Bulk population and deployable satellite performance analysis.
- Author
- Grile, Travis M. and Bettinger, Robert A.
- Subjects
- ARTIFICIAL satellites, MAXIMUM likelihood statistics, SYSTEM failures, FAILURE mode & effects analysis, ARTIFICIAL satellite launching
- Abstract
Flight data for deep space satellites launched and operated between 1991 and 2020 are analyzed to generate various reliability metrics. Satellite reliability is first estimated by the Kaplan‐Meier estimator, then parameterized through the Weibull distribution. This general process is applied to a general satellite data set that includes all deep space satellites launched between 1991 and 2020, as well as to two data subsets. One subset focuses on deployable satellites, while the other introduces a methodology of normalizing satellite lifetimes by satellite design life. Results from the general data set show that deep space satellites suffer from infant mortality, while results from the deployable subset show that deployable deep space satellites are reliable only over short periods of time. The design-life-normalized data set gives promising results, with satellites having a relatively high chance of reaching their design life. Available information regarding specific modes of failure is also leveraged to generate a percent contribution to overall satellite failure for eight distinct failure modes. Satellite failure due to crashing, in‐space propulsion failure, and telemetry system failure is shown to drive both early-in-life and later-in-life failure, making these the main causes of decreased reliability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
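The first stage in entry 15 is a Kaplan-Meier estimate. A compact sketch of that estimator, on made-up right-censored lifetimes rather than the satellite data, might look as follows (ties are handled observation by observation, which reproduces the usual grouped product for tied failures):

```python
import numpy as np

def kaplan_meier(t, observed):
    """Return failure times and the Kaplan-Meier survival estimate S(t)."""
    order = np.argsort(t)
    t, observed = t[order], observed[order]
    n = len(t)
    times, surv, s = [], [], 1.0
    for i, (ti, failed) in enumerate(zip(t, observed)):
        if failed:                    # censored points only shrink the risk set
            s *= 1.0 - 1.0 / (n - i)
            times.append(ti)
            surv.append(s)
    return np.array(times), np.array(surv)

t = np.array([0.5, 1.2, 2.0, 2.0, 3.5, 4.0, 6.0, 7.5])     # years, made up
observed = np.array([1, 1, 0, 1, 1, 0, 1, 0], dtype=bool)  # 0 = censored
print(kaplan_meier(t, observed))
# The resulting S(t) could then be parameterized by fitting a Weibull, as the
# entry describes.
```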
16. Control Charts Based on Zero to k Inflated Power Series Regression Models and Their Applications.
- Author
- Saboori, Hadi and Doostparast, Mahdi
- Abstract
In many different fields and industries, count data are publicly accessible. Control charts are used in quality studies to track count procedures. These control charts, however, have only a limited impact on zero-inflated data that contain extra zeros. The zero-inflated power series (ZIPS) models, particularly their crucial sub-models, the zero-inflated Poisson (ZIP), zero-inflated negative binomial (ZINB), and zero-inflated logarithmic (ZIL) models, are crucial approaches to handling such count data, and some control charts based on them have been proposed. However, there are situations where inflation can happen at one or more points other than zero (for instance, at one) or at more than one point (for instance, zero, one, and two). In these situations, the family of zero to k inflated power series (ZKIPS) models must be used in the control chart. In this work, we use a weighted score test statistic to examine upper-sided Shewhart and exponentially weighted moving average control charts. We only conducted numerical experiments on the zero to k Poisson model, which is one of the zero to k power series models, as an example. In ZKIPS models, the exponentially weighted moving average control chart can identify positive changes in the basis distribution's characteristics. In particular, by adding random effects, this method enables boosting the capability of detecting unnatural heterogeneity variability. For detecting small to moderate shifts, the proposed strategy is more effective than the current Shewhart chart, according to simulation findings obtained using the Monte Carlo methodology. To show the charts' usefulness, they are also applied to a real example. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
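Entry 16 builds charts on zero-inflated count models. As a small, hedged sketch of the underlying estimation problem, the snippet below fits the zero-inflated Poisson (ZIP) sub-model by maximum likelihood, with `pi` the extra-zero probability and `lam` the Poisson mean; the data are simulated, and the chart-design layer of the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_neg_loglik(params, y):
    pi, lam = params
    if not (0 < pi < 1) or lam <= 0:
        return np.inf
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))              # P(Y = 0)
    ll_pos = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

rng = np.random.default_rng(3)
y = rng.poisson(2.5, 500)
y[rng.random(500) < 0.3] = 0          # inject structural zeros: ZIP(0.3, 2.5)
fit = minimize(zip_neg_loglik, x0=[0.2, 1.0], args=(y,), method="Nelder-Mead")
print(fit.x)  # roughly (0.3, 2.5)
```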
17. Order-restricted statistical inference and optimal censoring scheme for Gompertz distribution based on joint type-II progressive censored data.
- Author
- Ren, Feiyan, Ye, Tianrui, and Gui, Wenhao
- Subjects
- MONTE Carlo method, FISHER information, DISTRIBUTION (Probability theory), BAYES' estimation, INFERENTIAL statistics, EXPECTATION-maximization algorithms
- Abstract
This paper considers the order-restricted statistical inference for two populations based on the joint type-II progressive censoring scheme. The lifetime distributions of the two populations are supposed to follow the Gompertz distribution with the same shape parameter but different scale parameters. The maximum likelihood estimates of the unknown parameters are derived by employing the Newton-Raphson algorithm and the expectation-maximization algorithm, respectively. The Fisher information matrix is then employed to construct asymptotic confidence intervals. For Bayes estimation, we assume an ordered Beta-Gamma prior for the scale parameters and a Gamma prior for the common shape parameter. Bayes estimates and the highest posterior density credible intervals for the unknown parameters under two different loss functions are obtained with the importance sampling technique. To evaluate the performance of order-restricted inference, extensive Monte Carlo simulations are performed, and two air-conditioning systems datasets are used to illustrate the proposed inference methods. In addition, the results are compared with the case where there is no order restriction on the parameters. Finally, the optimal censoring scheme is obtained by four optimality criteria. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Revisiting the Briggs Ancient DNA Damage Model: A Fast Maximum Likelihood Method to Estimate Post‐Mortem Damage.
- Author
- Zhao, Lei, Henriksen, Rasmus Amund, Ramsøe, Abigail, Nielsen, Rasmus, and Korneliussen, Thorfinn Sand
- Subjects
- FOSSIL DNA, MAXIMUM likelihood statistics, DAMAGE models, DNA analysis, DNA sequencing
- Abstract
One essential initial step in the analysis of ancient DNA is to authenticate that the DNA sequencing reads are actually from ancient DNA. This is done by assessing whether the reads exhibit typical characteristics of post‐mortem damage (PMD), including cytosine deamination and nicks. We present a novel statistical method implemented in a fast multithreaded programme, ngsBriggs, that enables rapid quantification of PMD by estimation of the Briggs ancient damage model parameters (Briggs parameters). Using a multinomial model with maximum likelihood fit, ngsBriggs accurately estimates the parameters of the Briggs model, quantifying the PMD signal from single- and double‐stranded DNA regions. We extend the original Briggs model to capture PMD signals for contemporary sequencing platforms and show that ngsBriggs accurately estimates the Briggs parameters across a variety of contamination levels. Classification of reads into ancient or modern reads, for the purpose of decontamination, is significantly more accurate using ngsBriggs than using other available methods. Furthermore, ngsBriggs is substantially faster than other state‐of‐the‐art methods. ngsBriggs offers a practical and accurate method for researchers seeking to authenticate ancient DNA and improve the quality of their data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Detection and Estimation of Diffuse Signal Components Using the Periodogram.
- Author
- Selva, Jesus
- Subjects
- MAXIMUM likelihood statistics, CHEBYSHEV polynomials, VECTOR data, INTERPOLATION, DETECTORS
- Abstract
One basic limitation of using the periodogram as a frequency estimator is that any of its significant peaks may result from a diffuse (or spread) frequency component rather than a pure one. Diffuse components are common in applications such as channel estimation, in which a given periodogram peak reveals the presence of a complex multipath distribution (unresolvable propagation paths or diffuse scattering, for example). We present a method to detect the presence of a diffuse component in a given peak based on analyzing the projection of the data vector onto the span of the signature's derivatives up to a given order. Fundamentally, a diffuse component is detected if the energy in the derivatives' subspace is too high at the peak's frequency, and its spread is estimated as the ratio between this last energy and the peak's energy. The method is based on exploiting the signature's Vandermonde structure through the properties of discrete Chebyshev polynomials. We also present an efficient numerical procedure for computing the data component in the derivatives' span based on barycentric interpolation. The paper contains a numerical assessment of the proposed estimator and detector. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Analysis of Carbon Dioxide Value with Extreme Value Theory Using Generalized Extreme Value Distribution.
- Author
- Khamrot, Pannawit, Phankhieo, Narin, Wachirawongsakorn, Piyada, Piros, Supanaree, and Deetae, Natthinee
- Abstract
This paper applies the generalized extreme value (GEV) distribution using maximum likelihood estimates to analyze extreme carbon dioxide data collected by the Provincial Energy Office of Phitsanulok from 2010 to 2023. The study aims to model return levels for carbon dioxide emissions for the periods of 5, 25, 50, and 100 years, utilizing data from various fuels--Gasohol E85, Gasohol E20, Gasohol 91, Gasohol 95, ULG95, and LPG. By fitting the GEV distribution, this research not only categorizes the behavior of emissions data under different subclasses of the GEV distribution but also confirms the suitability of the GEV model for this dataset. The findings indicate a trend of increasing return levels, suggesting rising peaks in carbon dioxide emissions over time. This model provides a valuable tool for forecasting and managing environmental risks associated with high emission levels. [ABSTRACT FROM AUTHOR]
- Published
- 2024
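The workflow in entry 20 (fit a GEV by maximum likelihood, then read off return levels) is compact enough to sketch. The block-maxima series below is synthetic, not the Phitsanulok data; note that SciPy's shape parameter `c` equals minus the xi used in most extreme-value texts.

```python
import numpy as np
from scipy import stats

annual_maxima = np.random.default_rng(4).gumbel(50, 8, size=14)  # stand-in data

# MLE fit of the GEV; SciPy's c = -xi relative to the common parameterization.
shape, loc, scale = stats.genextreme.fit(annual_maxima)

for T in (5, 25, 50, 100):
    # The T-year return level is the (1 - 1/T) quantile of the fitted GEV.
    print(T, "yr:", stats.genextreme.ppf(1 - 1 / T, shape, loc, scale))
```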
21. An Analysis of Type-I Generalized Progressive Hybrid Censoring for the One Parameter Logistic-Geometry Lifetime Distribution with Applications.
- Author
- Nagy, Magdy, Mosilhy, Mohamed Ahmed, Mansi, Ahmed Hamdi, and Abu-Moussa, Mahmoud Hamed
- Subjects
- RETICULUM cell sarcoma, MARKOV chain Monte Carlo, MAXIMUM likelihood statistics, DEATH rate, HAZARD function (Statistics)
- Abstract
Based on Type-I generalized progressive hybrid censored samples (GPHCSs), the parameter estimation for the unit-half logistic-geometry (UHLG) distribution is investigated in this work. Using maximum likelihood estimation (MLE) and Bayesian estimation, the parameters, reliability, and hazard functions of the UHLG distribution under GPHCSs have been assessed. Likewise, the computation is carried out for the asymptotic confidence intervals (ACIs). Furthermore, two bootstrap CIs, bootstrap-p and bootstrap-t, are mentioned. For symmetric loss functions, like squared error loss (SEL), and asymmetric loss functions, such as linear exponential loss (LL) and general entropy loss (GEL), there are specific Bayesian approximations. The Metropolis–Hastings sampler methodology was used to construct the credible intervals (CRIs). In conclusion, a genuine data set measuring the mortality statistics of a group of male mice with reticulum cell sarcoma is considered as an application of the methods given. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
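Entry 21 constructs credible intervals with Metropolis-Hastings samplers. A generic random-walk Metropolis-Hastings sketch is given below; the target here is a simple Gamma-prior/exponential-likelihood posterior chosen as a stand-in, not the UHLG model under generalized progressive hybrid censoring.

```python
import numpy as np

def log_posterior(lam, data, a=1.0, b=1.0):
    if lam <= 0:
        return -np.inf
    # Gamma(a, b) prior plus exponential likelihood, up to an additive constant.
    return (a - 1) * np.log(lam) - b * lam \
        + len(data) * np.log(lam) - lam * data.sum()

rng = np.random.default_rng(5)
data = rng.exponential(1 / 2.0, size=100)        # true rate 2.0

chain, lam = [], 1.0
lp = log_posterior(lam, data)
for _ in range(5000):
    prop = lam + rng.normal(0, 0.3)              # symmetric random-walk proposal
    lp_prop = log_posterior(prop, data)
    if np.log(rng.random()) < lp_prop - lp:      # accept w.p. min(1, ratio)
        lam, lp = prop, lp_prop
    chain.append(lam)

draws = np.array(chain[1000:])                   # discard burn-in
print(draws.mean(), np.percentile(draws, [2.5, 97.5]))  # 95% credible interval
```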
22. Bivariate Length-Biased Exponential Distribution under Progressive Type-II Censoring: Incorporating Random Removal and Applications to Industrial and Computer Science Data.
- Author
- Fayomi, Aisha, Almetwally, Ehab M., and Qura, Maha E.
- Subjects
- DISTRIBUTION (Probability theory), COPULA functions, CONFIDENCE intervals, MAXIMUM likelihood statistics, DATA science
- Abstract
In this paper, we address the analysis of bivariate lifetime data from a length-biased exponential distribution observed under Type II progressive censoring with random removals, where the number of units removed at each failure time follows a binomial distribution. We derive the likelihood function for the progressive Type II censoring scheme with random removals and apply it to the bivariate length-biased exponential distribution. The parameters of the proposed model are estimated using both likelihood and Bayesian methods for point and interval estimators, including asymptotic confidence intervals and bootstrap confidence intervals. We also employ different loss functions to construct Bayesian estimators. Additionally, a simulation study is conducted to compare the performance of censoring schemes. The effectiveness of the proposed methodology is demonstrated through the analysis of two real datasets from the industrial and computer science domains, providing valuable insights for illustrative purposes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Parameter Estimation of Uncertain Differential Equations Driven by Threshold Ornstein–Uhlenbeck Process with Application to U.S. Treasury Rate Analysis.
- Author
- Li, Anshui, Wang, Jiajia, and Zhou, Lianlian
- Subjects
- STOCHASTIC differential equations, MAXIMUM likelihood statistics, DIFFERENTIAL equations, MOMENTS method (Statistics), EVIDENCE gaps
- Abstract
Uncertain differential equations, as an alternative to stochastic differential equations, have proved to be extremely powerful across various fields, especially in finance theory. The issue of parameter estimation for uncertain differential equations is the key step in mathematical modeling and simulation, which is very difficult, especially when the corresponding terms are driven by some complicated uncertain processes. In this paper, we propose the uncertainty counterpart of the threshold Ornstein–Uhlenbeck process in probability, named the uncertain threshold Ornstein–Uhlenbeck process, filling a gap in the corresponding research in uncertainty theory. We then explore the parameter estimation problem under different scenarios, including cases where certain parameters are known in advance while others remain unknown. Numerical examples are provided to illustrate the proposed method. We also apply the method to study the term structure of the U.S. Treasury rates over a specific period, which can be modeled by the uncertain threshold Ornstein–Uhlenbeck process mentioned in this paper. The paper concludes with brief remarks and possible future directions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Diagnostic analytics for a GARCH model under skew-normal distributions.
- Author
- Liu, Yonghui, Wang, Jing, Yao, Zhao, Liu, Conan, and Liu, Shuangzhe
- Subjects
- GARCH model, MONTE Carlo method, MAXIMUM likelihood statistics, DATA analytics, EXPECTATION-maximization algorithms, CURVATURE
- Abstract
In this paper, a generalized autoregressive conditional heteroskedasticity model under skew-normal distributions is studied. A maximum likelihood approach is taken, and the parameters in the model are estimated based on the expectation-maximization algorithm. Statistical diagnostics are performed through the local influence technique, with the normal curvature and diagnostic results established for the model under four perturbation schemes to identify possible influential observations. A simulation study is conducted to evaluate the performance of our proposed method, and a real-world application is presented as an illustrative example. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Reliability evaluation for Weibull distribution with heavily Type II censored data.
- Author
- Liu, Mengyu, Zheng, Huiling, and Yang, Jun
- Subjects
- MAXIMUM likelihood statistics, LEAST squares, WEIBULL distribution, ESTIMATION bias, CENSORSHIP
- Abstract
The lifetime data collected from the field are usually heavily censored, in which case getting an accurate reliability evaluation is challenging. For heavily Type‐II censored data, the parameter estimation biases of traditional methods (i.e., maximum likelihood estimation (MLE) and least squares estimation (LSE)) are still large, and for Bayesian methods it is hard to specify the priors in practice. Therefore, considering the known range of the shape parameter of the Weibull distribution, this study proposes two novel parameter estimation methods: the three‐step MLE method and the hybrid estimation method. For the three‐step MLE method, the initial estimates of the shape and scale parameters are first derived using MLE and then updated by the single-parameter MLE method with the range constraint on the shape parameter. For the hybrid estimation method, the shape parameter is estimated by the LSE method with the range constraint on the shape parameter, and then the scale parameter estimate is obtained by MLE. On this basis, two numerical examples are presented to demonstrate the consistency and effectiveness of the proposed methods. Finally, a case study on turbine engines is given to verify the effectiveness and applicability of the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
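For entry 25, the baseline that the three-step and hybrid methods improve upon is plain MLE under Type-II censoring: only the first r of n ordered failures are seen, and the n - r survivors contribute the survival probability at the r-th failure. A hedged sketch on simulated data:

```python
import numpy as np
from scipy.optimize import minimize

def weibull_type2_neg_loglik(params, t_obs, n):
    k, c = params
    if k <= 0 or c <= 0:
        return np.inf
    r = len(t_obs)
    ll = r * np.log(k / c) + (k - 1) * np.sum(np.log(t_obs / c))
    ll -= np.sum((t_obs / c) ** k)
    ll -= (n - r) * (t_obs[-1] / c) ** k      # censored units: -log S(t_(r))
    return -ll

rng = np.random.default_rng(6)
n, r = 100, 20                                # heavy censoring: 80% unobserved
t_obs = np.sort(rng.weibull(1.8, n) * 3.0)[:r]
fit = minimize(weibull_type2_neg_loglik, x0=[1.0, 1.0],
               args=(t_obs, n), method="Nelder-Mead")
print(fit.x)  # rough (shape, scale); the bias at small r motivates the entry
```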
26. A SCALE PARAMETERS AND MODIFIED RELIABILITY ESTIMATION FOR THE INVERSE EXPONENTIAL RAYLEIGH DISTRIBUTION.
- Author
- AL-Sultany, Shurooq A. K.
- Subjects
- RAYLEIGH model, MAXIMUM likelihood statistics, SAMPLE size (Statistics)
- Abstract
This paper presents methods for estimating the scale parameters and modified reliability for the Inverse Exponential Rayleigh Distribution, including maximum likelihood, ranked set sampling, and Cramér–von Mises estimation. In all the mentioned estimation methods, the Newton-Raphson iterative numerical method was used. A simulation was then conducted to compare the three methods with six cases and different sample sizes. The comparisons between scale parameter estimates were based on Mean Square Error values, while the comparisons between estimates of the modified reliability function were based on Integrated Mean Square Error values. The results show that the Cramér–von Mises (MCV) estimator is the best among the three methods for estimating the modified reliability function. [ABSTRACT FROM AUTHOR]
- Published
- 2024
27. Inference on process capability index Spmk for a new lifetime distribution.
- Author
- Karakaya, Kadir
- Subjects
- MONTE Carlo method, PROCESS capability, MAXIMUM likelihood statistics, CONTINUOUS distributions, CONFIDENCE intervals
- Abstract
In various applied disciplines, the modeling of continuous data often requires the use of flexible continuous distributions. Meeting this demand calls for the introduction of new continuous distributions that possess desirable characteristics. This paper introduces a new continuous distribution. Several estimators for estimating the unknown parameters of the new distribution are discussed, and their efficiency is assessed through Monte Carlo simulations. Furthermore, the process capability index Spmk is examined when the underlying distribution is the proposed distribution. The maximum likelihood estimation of the Spmk is also studied. The asymptotic confidence interval is also constructed for Spmk. The simulation results indicate that estimators for both the unknown parameters of the new distribution and the Spmk provide reasonable results. Some practical analyses are also performed on both the new distribution and the Spmk. The results of the conducted data analysis indicate that the new distribution yields effective outcomes in modeling lifetime data in the literature. Similarly, the data analyses performed for Spmk illustrate that the new distribution can be utilized for process capability indices by quality controllers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Reliability estimation and statistical inference under joint progressively Type-II right-censored sampling for certain lifetime distributions.
- Author
- Lin, Chien-Tai, Chen, Yen-Chou, Yeh, Tzu-Chi, and Ng, Hon Keung Tony
- Abstract
In this article, the parameter estimation of several commonly used two-parameter lifetime distributions, including the Weibull, inverse Gaussian, and Birnbaum–Saunders distributions, based on a joint progressively Type-II right-censored sample is studied. Different numerical methods and algorithms are used to compute the maximum likelihood estimates of the unknown model parameters. These methods include the Newton–Raphson method, the stochastic expectation–maximization (SEM) algorithm, and the dual annealing (DA) algorithm. These estimation methods are compared in terms of accuracy (e.g. the bias and mean squared error), computational time and effort (e.g. the required number of iterations), the ability to obtain the largest value of the likelihood, and convergence issues by means of a Monte Carlo simulation study. Recommendations are made based on the simulated results. A real data set is analyzed for illustrative purposes. These methods are implemented in Python, and the computer programs are available from the authors upon request. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Zero-Inflated Binary Classification Model with Elastic Net Regularization.
- Author
- Xin, Hua, Lio, Yuhlong, Chen, Hsien-Ching, and Tsai, Tzong-Ru
- Subjects
- MACHINE learning, MAXIMUM likelihood statistics, EXPECTATION-maximization algorithms, OPEN-ended questions, DIABETES
- Abstract
Zero inflation and overfitting can reduce the accuracy of machine learning models for characterizing binary data sets. A zero-inflated Bernoulli (ZIBer) model can be the right model to characterize zero-inflated binary data sets. When the ZIBer model is used to characterize zero-inflated binary data sets, overcoming the overfitting problem is still an open question. To alleviate the overfitting problem when using the ZIBer model, the minus log-likelihood function of the ZIBer model with the elastic net regularization rule as an overfitting penalty is proposed as the loss function. An estimation procedure to minimize the loss function is developed in this study using the gradient descent method (GDM) with a momentum term as the learning rate. The proposed estimation method has two advantages. First, it is a general method that simultaneously uses L1- and L2-norm terms for the penalty and includes the ridge and least absolute shrinkage and selection operator methods as special cases. Second, the momentum learning rate can accelerate the convergence of the GDM and enhance the computational efficiency of the proposed estimation procedure. The parameter selection strategy is studied, and the performance of the proposed method is evaluated using Monte Carlo simulations. A diabetes example is used as an illustration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
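Entry 29's estimation procedure is gradient descent with a momentum learning rate on an elastic-net-penalized likelihood. The sketch below shows that idea on a plain logistic model (the zero-inflation mixture component of the ZIBer model is omitted for brevity, and all names are illustrative).

```python
import numpy as np

def fit_logistic_elastic_net(X, y, alpha=0.1, l1_ratio=0.5,
                             lr=0.1, momentum=0.9, iters=2000):
    w = np.zeros(X.shape[1])
    v = np.zeros_like(w)                       # momentum buffer
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid
        grad = X.T @ (p - y) / len(y)          # log-loss gradient
        # Elastic net: smooth L2 part plus a subgradient for the L1 part.
        grad += alpha * ((1 - l1_ratio) * w + l1_ratio * np.sign(w))
        v = momentum * v - lr * grad           # momentum update
        w += v
    return w

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 4))
true_w = np.array([2.0, -1.0, 0.0, 0.0])
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)
print(fit_logistic_elastic_net(X, y))  # last two weights shrink toward zero
```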
30. Tensile Properties of Cattail Fibres at Various Phenological Development Stages.
- Author
- Hossain, Mohammed Shahadat, Rahman, Mashiur, and Cicek, Nazim
- Subjects
- MAXIMUM likelihood statistics, CALCIUM oxalate, WEIBULL distribution, INDUSTRIAL capacity, GROWING season
- Abstract
Cattails (Typha latifolia L.) are naturally occurring aquatic macrophytes with significant industrial potential because of their abundance, high-quality fibers, and high fiber yields. This study is the first attempt to investigate how phenological development and plant maturity impact the quality of cattail fibers as they relate to composite applications. It was observed that fibers from all five growth stages exhibited a Weibull shape parameter greater than 1.0, with a goodness-of-fit exceeding 0.8. These calculations were performed using both the Least Square Regression (LSR) and Maximum Likelihood Estimation (MLE) methods. Among the estimators, the MLE method provided the most conservative estimation of the Weibull parameters. Based on the Weibull parameters obtained with all estimators, cattail fibers from all five growth stages appear suitable for composite applications. The consistency of shape parameters across all five growth stages can be attributed to the morphological and molecular development of cattail fiber during the vegetative period. These developments were confirmed through the presence of calcium oxalate (CaOx) plates, the elemental composition, and specific infrared peaks: a peak at 2360 cm−1 contributing to the strength, and cellulose peaks at 1635, 2920, and 3430 cm−1. In conclusion, it was found that the mechanical properties of cattail fiber remain similar when harvested multiple times in a single growing season. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
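Entry 30 compares least squares regression on the linearized Weibull plot with maximum likelihood. Both estimators fit in a few lines; the fibre strengths below are made up, and Bernard's median-rank approximation is one common plotting-position choice, not necessarily the authors'.

```python
import numpy as np
from scipy import stats

strengths = np.sort(np.random.default_rng(8).weibull(3.0, 60) * 400.0)  # MPa
n = len(strengths)

# LSR: since F(x) = 1 - exp(-(x/c)^k), ln(-ln(1-F)) = k*ln(x) - k*ln(c).
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)        # Bernard's median ranks
slope, intercept = np.polyfit(np.log(strengths), np.log(-np.log(1 - F)), 1)
k_lsr, c_lsr = slope, np.exp(-intercept / slope)

# MLE via SciPy, with the location fixed at zero.
k_mle, _, c_mle = stats.weibull_min.fit(strengths, floc=0)

print(f"LSR: k={k_lsr:.2f}, c={c_lsr:.1f};  MLE: k={k_mle:.2f}, c={c_mle:.1f}")
```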
31. Reliability analysis of two Gompertz populations under joint progressive type-ii censoring scheme based on binomial removal.
- Author
- Abo-Kasem, O.E., Almetwally, Ehab M., and Abu El Azm, Wael S.
- Subjects
- MONTE Carlo method, CENSORING (Statistics), BAYES' estimation, DISTRIBUTION (Probability theory), MAXIMUM likelihood statistics, MARKOV chain Monte Carlo
- Abstract
Analysis of joint censoring schemes has received considerable attention in the last few years. In this paper, maximum likelihood and Bayes methods of estimation are used to estimate the unknown parameters of two Gompertz populations under a joint progressive Type-II censoring scheme. Bayesian estimates of the unknown parameters are obtained based on squared error loss functions under the assumption of independent gamma priors. We propose to apply the Markov chain Monte Carlo technique to carry out the Bayes estimation procedure. The approximate, bootstrap, and credible confidence intervals for the unknown parameters are also obtained. Also, the reliability and hazard rate functions of the two Gompertz populations under the joint progressive Type-II censoring scheme are obtained, along with the corresponding approximate confidence intervals. Finally, all the theoretical results obtained are assessed and compared using two real-world data sets and Monte Carlo simulation studies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Concentration inequalities of MLE and robust MLE.
- Author
- Yang, Xiaowei, Liu, Xinqiao, and Wei, Haoyu
- Subjects
- MAXIMUM likelihood statistics, MACHINE learning, STATISTICS
- Abstract
The Maximum Likelihood Estimator (MLE) serves an important role in statistics and machine learning. In this article, for i.i.d. variables, we obtain constant-specified and sharp concentration inequalities and oracle inequalities for the MLE only under exponential moment conditions. Furthermore, in a robust setting, the sub-Gaussian type oracle inequalities of the log-truncated maximum likelihood estimator are derived under the second-moment condition. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. The exponentiated-Weibull proportional hazard regression model with application to censored survival data.
- Author
- Ishag, Mohamed A.S., Wanjoya, Anthony, Adem, Aggrey, Alsultan, Rehab, Alghamdi, Abdulaziz S., and Afify, Ahmed Z.
- Subjects
- PROPORTIONAL hazards models, MONTE Carlo method, REGRESSION analysis, CENSORING (Statistics), MAXIMUM likelihood statistics
- Abstract
Proportional hazard regression models are widely used statistical tools for analyzing survival data and estimating the effects of covariates on survival times. It is assumed that the effects of the covariates are constant across time. In this paper, we propose a novel extension of the proportional hazard model by incorporating an exponentiated-Weibull distribution to model the baseline hazard function. The proposed model offers more flexibility in capturing various shapes of failure rates and accommodates both monotonic and non-monotonic hazard shapes. The performance of the proposed model and comparisons with other commonly used survival models, including the generalized log–logistic, Weibull, Gompertz, and exponentiated exponential PH regression models, are explored using simulation results. The results demonstrate the ability of the introduced model to capture the baseline hazard shapes and to estimate the effect of covariates on the hazard function accurately. Furthermore, two real survival medical data sets are analyzed to illustrate the practical importance of the proposed model in providing accurate predictions of survival outcomes for individual patients. Finally, the survival data analysis reveals that the model is a powerful tool for analyzing complex survival data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Frequentist and Bayesian approach for the generalized logistic lifetime model with applications to air-conditioning system failure times under joint progressive censoring data.
- Author
- Hasaballah, Mustafa M., Balogun, Oluwafemi Samson, and Bakr, M. E.
- Subjects
- MARKOV chain Monte Carlo, MONTE Carlo method, BAYES' estimation, MAXIMUM likelihood statistics, INFERENTIAL statistics
- Abstract
Based on joint progressive Type-II censored data, we examined the statistical inference of the generalized logistic distribution with different shape and scale parameters in this research. Wherever possible, we explored maximum likelihood estimators for unknown parameters within the scope of the joint progressive censoring scheme. Bayesian inferences for these parameters were demonstrated using a Gamma prior under the squared error loss function and the linear exponential loss function. It was important to note that obtaining Bayes estimators and the corresponding credible intervals was not straightforward; thus, we recommended using the Markov Chain Monte Carlo method to compute them. We performed real-world data analysis for demonstrative purposes and ran Monte Carlo simulations to compare the performance of all the suggested approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Context-Driven Service Deployment Using Likelihood-Based Approach for Internet of Things Scenarios.
- Author
- Banerji, Nandan, Paul, Chayan, Debnath, Bikash, Das, Biplab, Chhabra, Gurpreet Singh, Mohanta, Bhabendu Kumar, and Awad, Ali Ismail
- Subjects
- MAXIMUM likelihood statistics, INTERNET of things, CONSUMPTION (Economics), INFORMATION services, MIDDLEWARE
- Abstract
In a context-aware Internet of Things (IoT) environment, the functional contexts of devices and users will change over time depending on their service consumption. Each iteration of an IoT middleware algorithm will also encounter changes occurring in the contexts due to the joining/leaving of new/old members; this is the inherent nature of ad hoc IoT scenarios. Individual users will have notable preferences in their service consumption patterns; by leveraging these patterns, the approach presented in this article focuses on how these changes impact performance due to functional-context switching over time. This is based on the idea that consumption patterns will exhibit certain time-variant correlations. The maximum likelihood estimation (MLE) is used in the proposed approach to capture the impact of these correlations and study them in depth. The results of this study reveal how the correlation probabilities and the system performance change over time; this also aids with the construction of the boundaries of certain time-variant correlations in users' consumption patterns. In the proposed approach, the information gleaned from the MLE is used in arranging the service information within a distributed service registry based on users' service usage preferences. Practical simulations were conducted over small (100 nodes), medium (1000 nodes), and relatively larger (10,000 nodes) networks. It was found that the approach described helps to reduce service discovery time and can improve the performance in service-oriented IoT scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Repair alert model when the lifetimes are discretely distributed.
- Author
- Atlehkhani, Mohammad and Doostparast, Mahdi
- Abstract
This paper deals with repair alert models. They are used for analyzing lifetime data coming from engineering devices under maintenance management. Repair alert models have been proposed and investigated for continuous component lifetimes; existing studies are concerned with the lifetimes of items described by continuous distributions. However, discrete lifetimes are also frequently encountered in practice. Examples include operating a piece of equipment in cycles, reporting field failures that are gathered weekly, and the number of pages a device prints before failure. Here, the repair alert models are developed for the case where device lifetimes are discrete. A wide class of discrete distributions, called the telescopic family, is considered for the component lifetimes, and the proposed repair alert model is explained in detail. Furthermore, the problem of estimating parameters is investigated and illustrated by analyzing a real data set. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Quasi shrinkage estimation of a block-structured covariance matrix.
- Author
- Markiewicz, A., Mokrzycka, M., and Mrowińska, M.
- Subjects
- LEAST squares, MAXIMUM likelihood statistics, COVARIANCE matrices, ORTHOGRAPHIC projection
- Abstract
In this paper, we study the estimation of a block covariance matrix with linearly structured off-diagonal blocks. We consider estimation based on the least squares method, which has some drawbacks. These estimates are not always well conditioned and may not even be definite. We propose a new estimation procedure providing a structured positive definite and well-conditioned estimator with good statistical properties. The least squares estimator is improved with the use of a shrinkage method and an additional algebraic approach. The resulting so-called quasi shrinkage estimator is compared with the structured maximum likelihood estimator. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. An alternative bounded distribution: regression model and applications.
- Author
- Sağlam, Şule and Karakaya, Kadir
- Subjects
- PROCESS capability, MONTE Carlo method, LEAST squares, MAXIMUM likelihood statistics, REGRESSION analysis
- Abstract
In this paper, a new bounded distribution is introduced and some distributional properties of the new distribution are discussed. Moreover, the new distribution is applied in the engineering field to the Cpc process capability index. Three unknown parameters of the distribution are estimated with several estimators, and the performances of the estimators are evaluated with a Monte Carlo simulation. A new regression model is introduced based on this new distribution as an alternative to the beta and Kumaraswamy models. Furthermore, it is one of the first studies where the regression model parameters are estimated using least squares, weighted least squares, Cramér–von Mises, and maximum product spacing estimators, in addition to maximum likelihood. The efficiency of the estimators for the parameters of the regression model is further assessed through a simulation. Real datasets are analyzed to demonstrate the applicability of the new distribution and regression model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. A modified uncertain maximum likelihood estimation with applications in uncertain statistics.
- Author
- Liu, Yang and Liu, Baoding
- Subjects
- MAXIMUM likelihood statistics, TIME series analysis, REGRESSION analysis, STATISTICAL models, STATISTICS, DIFFERENTIAL equations
- Abstract
In uncertain statistics, the uncertain maximum likelihood estimation is a method of estimating the values of unknown parameters of an uncertain statistical model that make the observed data most likely. However, the observed data obtained in practice usually contain outliers. In order to eliminate the influence of outliers when estimating unknown parameters, this article modifies the uncertain maximum likelihood estimation. Following that, the modified uncertain maximum likelihood estimation is applied to uncertain regression analysis, uncertain time series analysis, and uncertain differential equation. Finally, some real-world examples are provided to illustrate the modified uncertain maximum likelihood estimation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Estimation of the constant-stress model with bathtub-shaped failure rates under progressive type-I interval censoring scheme.
- Author
- Sief, Mohamed, Liu, Xinsheng, Alsadat, Najwan, and Abd El-Raheem, Abd El-Raheem M.
- Subjects
- MAXIMUM likelihood statistics, ACCELERATED life testing, CONFIDENCE intervals, PARAMETER estimation, MARKOV chain Monte Carlo, DATA analysis
- Abstract
This paper investigates constant-stress accelerated life tests interrupted by a progressive type-I interval censoring regime. We provide a model based on the Chen distribution with a constant shape parameter and a log-linear connection between the scale parameter and stress loading. Inferential methods, whether classical or Bayesian, are employed to address model parameters and reliability attributes. Classical methods involve the estimation of model parameters through maximum likelihood and midpoint techniques. Bayesian approximations are achieved via the utilization of the Metropolis–Hastings algorithm, Tierney-Kadane procedure, and importance sampling methods. Furthermore, we engage in a discourse on the estimation of confidence intervals, making references to both asymptotic confidence intervals and credible intervals. To conclude, we furnish a simulation study, a corresponding discussion, and supplement these with an analysis of real data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Measurement and Evaluation of the Development Level of Health and Wellness Tourism from the Perspective of High-Quality Development.
- Author
- Pan, Huali, Mi, Huanhuan, Chen, Yanhua, Chen, Ziyan, and Zhou, Weizhong
- Abstract
In recent years, with the dramatic surge in the demand for health and elderly care services, the emergence of the health dividend has presented good development opportunities for health and wellness tourism. However, as a sector of the economy, health and wellness tourism still faces numerous challenges in achieving high-quality development. Therefore, this paper focuses on 31 provinces in China and constructs a multidimensional evaluation index system for the high-quality development of health and wellness tourism. The global entropy-weighted TOPSIS method and cluster analysis are used to conduct in-depth measurements, regional comparisons, and classification evaluations of the high-quality development of health and wellness tourism in each province. The research results indicate that: (1) From a quality perspective, the level of health and wellness tourism development in 11 provinces in China has exceeded the national average, while the remaining 20 provinces are below the national average. (2) From a regional perspective, the current level of high-quality development in health and wellness tourism decreases sequentially from the eastern to the central to the western regions, with significant regional differences. (3) Overall, the development in the 31 provinces can be categorized into five types: the High-Quality Benchmark Type, the High-Quality Stable Type, the High-Quality Progressive Type, the General-Quality Potential Type, and the General-Quality Lagging Type. (4) From a single-dimension analysis perspective, there are significant differences in the rankings of each province across different dimensions. Finally, this paper enriches and expands the theoretical foundation on the high-quality development of health and wellness tourism; on the other hand, it puts forward targeted countermeasures and suggestions to help promote the comprehensive enhancement of health and wellness tourism. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
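The ranking method this entry relies on is entropy-weighted TOPSIS. The paper applies a "global" panel-data variant; the plain single-period version below is a sketch with made-up benefit-type indicators standing in for the provincial index system.

```python
# Sketch of entropy-weighted TOPSIS: entropy weights from the decision
# matrix, then ranking by relative closeness to the ideal solution.
# The matrix X below is a made-up placeholder, not the paper's data.
import numpy as np

def entropy_topsis(X):
    """X: (n_alternatives, m_criteria) matrix of benefit-type indicators."""
    # 1. Column proportions and entropy-based weights.
    P = X / X.sum(axis=0)
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        e = -np.nansum(P * np.log(P), axis=0) / np.log(n)
    w = (1 - e) / (1 - e).sum()
    # 2. Weighted, vector-normalized decision matrix.
    V = w * X / np.linalg.norm(X, axis=0)
    # 3. Distances to the ideal and anti-ideal solutions.
    d_plus = np.linalg.norm(V - V.max(axis=0), axis=1)
    d_minus = np.linalg.norm(V - V.min(axis=0), axis=1)
    # 4. Relative closeness: higher means closer to the ideal.
    return d_minus / (d_plus + d_minus)

X = np.array([[0.62, 0.35, 0.81],
              [0.48, 0.52, 0.40],
              [0.90, 0.20, 0.66]])
print(entropy_topsis(X))   # closeness score per alternative
```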
42. Tobit models for count time series.
- Author
-
Weiß, Christian H. and Zhu, Fukang
- Subjects
- *
MAXIMUM likelihood statistics , *DISTRIBUTION (Probability theory) , *MOVING average process , *TIME series analysis , *PARAMETER estimation - Abstract
Several models for count time series have been developed during the last decades, often inspired by traditional autoregressive moving average (ARMA) models for real‐valued time series, including integer‐valued ARMA (INARMA) and integer‐valued generalized autoregressive conditional heteroscedasticity (INGARCH) models. Both INARMA and INGARCH models exhibit an ARMA‐like autocorrelation function (ACF). To achieve negative ACF values within the class of INGARCH models, log and softplus link functions are suggested in the literature, where the softplus approach leads to conditional linearity in good approximation. However, the softplus approach is limited to the INGARCH family for unbounded counts, that is, it can neither be used for bounded counts, nor for count processes from the INARMA family. In this paper, we present an alternative solution, named the Tobit approach, for achieving approximate linearity together with negative ACF values, which is more generally applicable than the softplus approach. A Skellam–Tobit INGARCH model for unbounded counts is studied in detail, including stationarity, approximate computation of moments, maximum likelihood and censored least absolute deviations estimation for unknown parameters and corresponding simulations. Extensions of the Tobit approach to other situations are also discussed, including underlying discrete distributions, INAR models, and bounded counts. Three real‐data examples are considered to illustrate the usefulness of the new approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
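A speculative sketch of the Tobit idea in this entry: a latent Skellam variable with a conditionally linear mean is censored at zero, which lets the observed counts carry negative autocorrelation. The parameterization below (mean/variance Skellam split, clipped rates) is an illustrative assumption, not the authors' exact Skellam–Tobit INGARCH specification.

```python
# Simulate a Skellam-Tobit INGARCH(1,1)-like process: latent Skellam
# value with linear conditional mean, censored at zero. A negative
# feedback coefficient produces a negative lag-1 autocorrelation.
import numpy as np

rng = np.random.default_rng(1)

def skellam(mu, sigma2):
    # Difference of two Poissons has mean lam1 - lam2, variance lam1 + lam2;
    # rates are clipped at zero for safety when |mu| exceeds sigma2.
    lam1 = max((sigma2 + mu) / 2.0, 0.0)
    lam2 = max((sigma2 - mu) / 2.0, 0.0)
    return rng.poisson(lam1) - rng.poisson(lam2)

omega, alpha, sigma2 = 2.0, -0.5, 6.0   # negative alpha -> negative ACF
T = 2000
y = np.zeros(T, dtype=int)
for t in range(1, T):
    mu_t = omega + alpha * y[t - 1]     # conditionally linear latent mean
    z = skellam(mu_t, sigma2)           # latent, possibly negative, value
    y[t] = max(z, 0)                    # Tobit-style censoring at zero

print("lag-1 sample ACF:", np.corrcoef(y[:-1], y[1:])[0, 1])
```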
43. Properties, estimation, and applications of the extended log-logistic distribution.
- Author
-
Kariuki, Veronica, Wanjoya, Anthony, Ngesa, Oscar, Alharthi, Amirah Saeed, Aljohani, Hassan M., and Afify, Ahmed Z.
- Subjects
- *
ESTIMATION theory , *MAXIMUM likelihood statistics , *ORDER statistics , *DATA modeling , *SIMPLICITY - Abstract
This paper presents the exponentiated alpha-power log-logistic (EAPLL) distribution, which extends the log-logistic distribution. The EAPLL distribution is well suited to survival data modeling, offering analytical simplicity and accommodating both monotone and non-monotone failure rates. We derive some of its mathematical properties and compare eight estimation methods in an extensive simulation study. To determine the best estimation approach, we rank the methods by their mean estimates, mean squared errors, and average absolute biases, using both partial and overall rankings. Furthermore, we use the EAPLL distribution to examine three real-life survival data sets, demonstrating its superior performance over competing log-logistic distributions. This study adds vital insights to survival analysis methodology and provides a solid framework for modeling various survival data scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
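The EAPLL itself is not in any standard library, but the baseline log-logistic model it extends is: SciPy ships it under the name `fisk`. The sketch below shows the maximum likelihood workflow (one of the eight methods the paper compares) on simulated data; the exponentiation and alpha-power layers are not reproduced.

```python
# Fit the baseline log-logistic distribution (SciPy's `fisk`) by MLE
# and check the fit with a Kolmogorov-Smirnov test; simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = stats.fisk.rvs(c=2.5, scale=1.8, size=400, random_state=rng)

# Fix the location at 0 so only the shape c and the scale are estimated.
c_hat, loc_hat, scale_hat = stats.fisk.fit(data, floc=0)
print(f"shape={c_hat:.3f}, scale={scale_hat:.3f}")

# Kolmogorov-Smirnov check of the fitted model against the data.
print(stats.kstest(data, "fisk", args=(c_hat, loc_hat, scale_hat)))
```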
44. Different estimation techniques and data analysis for constant-partially accelerated life tests for power half-logistic distribution.
- Author
-
Alomani, Ghadah A., Hassan, Amal S., Al-Omari, Amer I., and Almetwally, Ehab M.
- Subjects
- *
ACCELERATED life testing , *CONFIDENCE intervals , *DATA analysis , *LEAST squares , *MAXIMUM likelihood statistics - Abstract
Partially accelerated life tests (PALTs) are employed when the results of accelerated life testing cannot be extrapolated to normal usage conditions. This work addresses several estimation strategies for constant PALT with complete data, assuming the lifetime of the test item follows the power half-logistic distribution. Several classical and Bayesian techniques are presented to estimate the distribution parameters and the acceleration factor: Anderson–Darling, maximum likelihood, Cramér–von Mises, ordinary least squares, weighted least squares, maximum product of spacings, and Bayesian estimation. Additionally, Bayesian credible intervals and approximate confidence intervals are constructed. A simulation study compares the outcomes of the various estimation methods based on mean squared error (MSE), average absolute bias, interval length, and coverage probability. The study shows that maximum product of spacings estimation is the most effective classical strategy in most circumstances, attaining the minimum MSE and average bias; in the majority of situations, the Bayesian method outperforms the others on both criteria. Compared with the approximate confidence intervals, the Bayesian credible intervals have higher coverage probability and smaller average length. Two authentic data sets are examined for illustration, showing that the methods are workable and applicable to certain engineering-related problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
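Since maximum product of spacings (MPS) comes out best in this entry's simulations, here is a minimal sketch of the method, assuming the common power half-logistic parameterization F(x) = ((1 - e^{-lx}) / (1 + e^{-lx}))^a; treat that CDF form as an illustrative assumption rather than the paper's exact definition.

```python
# Maximum product of spacings: maximize the sum of log-spacings of the
# fitted CDF evaluated at the order statistics, padded with 0 and 1.
import numpy as np
from scipy.optimize import minimize

def phl_cdf(x, a, l):
    # Assumed power half-logistic CDF.
    return ((1.0 - np.exp(-l * x)) / (1.0 + np.exp(-l * x))) ** a

def neg_log_spacings(params, x_sorted):
    a, l = params
    if a <= 0 or l <= 0:
        return np.inf
    u = np.concatenate(([0.0], phl_cdf(x_sorted, a, l), [1.0]))
    d = np.diff(u)                      # spacings between order statistics
    if np.any(d <= 0):
        return np.inf
    return -np.sum(np.log(d))

rng = np.random.default_rng(3)
a_true, l_true = 1.5, 0.7
u = rng.uniform(size=300)
v = u ** (1.0 / a_true)                 # inverse-CDF sampling
x = -np.log((1.0 - v) / (1.0 + v)) / l_true

res = minimize(neg_log_spacings, x0=[1.0, 1.0], args=(np.sort(x),),
               method="Nelder-Mead")
print("MPS estimates (a, l):", res.x)
```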
45. A TWO-PARAMETER ARADHANA DISTRIBUTION WITH APPLICATIONS TO RELIABILITY ENGINEERING.
- Author
-
Shanker, Ravi, Soni, Nitesh Kumar, Shanker, Rama, Ray, Mousumi, and Prodhani, Hosenur Rahman
- Subjects
- *
DISTRIBUTION (Probability theory) , *RELIABILITY in engineering - Abstract
The search for a statistical distribution to model reliability data from reliability engineering is challenging, chiefly because of the stochastic nature of the data and the presence of skewness, kurtosis, and over-dispersion. During recent decades, several one- and two-parameter statistical distributions have been proposed in the statistics literature, but they have been unable to capture the nature of such data. In the present paper, a two-parameter Aradhana distribution, which includes the one-parameter Aradhana distribution as a particular case, is proposed using the convex-combination approach to deriving a new statistical distribution. Various interesting and useful statistical properties, including the survival function, hazard function, reverse hazard function, mean residual life function, stochastic ordering, mean and median deviations, stress-strength reliability, and the Bonferroni and Lorenz curves and their indices, are discussed. The raw moments, central moments, and moment-based descriptive measures of the proposed distribution are obtained. Estimation of the parameters by the maximum likelihood method is explained, and a simulation study examines the consistency of the maximum likelihood estimators as the sample size increases. The goodness of fit of the proposed distribution is assessed using the Akaike information criterion and the Kolmogorov-Smirnov statistic. Finally, two real lifetime datasets from reliability engineering are presented to demonstrate its applications; the proposed distribution shows a better fit than the two-parameter generalized Aradhana, quasi Aradhana, new quasi Aradhana, power Aradhana, weighted Aradhana, gamma, and Weibull distributions. The flexibility, tractability, and usefulness of the proposed distribution make it well suited to modeling reliability data, and its wide applicability should draw the attention of researchers in reliability engineering and the biomedical sciences. [ABSTRACT FROM AUTHOR]
- Published
- 2024
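The Aradhana family is not available in standard libraries, but the goodness-of-fit comparison this entry describes is easy to reproduce for the two classical competitors it names. The sketch below fits gamma and Weibull models by maximum likelihood and compares them with AIC and the Kolmogorov-Smirnov statistic; the data are a simulated placeholder.

```python
# Goodness-of-fit comparison in the spirit of the paper: fit candidate
# models by MLE, then compare AIC and the Kolmogorov-Smirnov statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = stats.gamma.rvs(a=2.0, scale=1.5, size=200, random_state=rng)

for name, dist in [("gamma", stats.gamma), ("weibull", stats.weibull_min)]:
    params = dist.fit(data, floc=0)          # location held at zero
    loglik = np.sum(dist.logpdf(data, *params))
    k = len(params) - 1                      # free parameters (loc fixed)
    aic = 2 * k - 2 * loglik
    ks = stats.kstest(data, dist.cdf, args=params).statistic
    print(f"{name:8s}  AIC={aic:8.2f}  KS={ks:.4f}")
```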
46. A MODIFIED AILAMUJIA DISTRIBUTION: PROPERTIES AND APPLICATION.
- Author
-
John, DAVID Ikwuoche, Nkiru, OKEKE Evelyn, and Lilian, FRANKLIN
- Subjects
- *
DISTRIBUTION (Probability theory) , *GENERATING functions - Abstract
This study introduces a modified one-parameter Ailamujia distribution, called the entropy-transformed Ailamujia distribution (ETAD), to handle both symmetric and asymmetric lifetime data sets. Properties of the ETAD, such as order statistics, reliability statistics, entropy, moments, the moment generating function, the quantile function, and variability measures, are derived. The maximum likelihood estimation (MLE) method is used to estimate the ETAD parameter, and simulation at different sample sizes shows the MLE to be consistent, efficient, and unbiased. The flexibility of the ETAD is demonstrated by fitting it to six real lifetime data sets alongside seven competing one-parameter distributions. Goodness-of-fit results based on the Akaike information criterion, the Bayesian information criterion, the corrected Akaike information criterion, and the Hannan-Quinn information criterion show that the ETAD fits best among all competing distributions across all six data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
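The consistency check this entry describes follows a standard recipe: estimate the parameter repeatedly at growing sample sizes and watch bias and MSE shrink. The ETAD density is not reproduced here; the sketch uses the base Ailamujia distribution as a stand-in, assuming its usual pdf f(x) = 4θ²x e^{-2θx}, whose MLE has the closed form n / Σx.

```python
# MLE consistency simulation for the base Ailamujia distribution,
# which equals Gamma(shape=2, rate=2*theta); its MLE is n / sum(x).
import numpy as np

rng = np.random.default_rng(5)
theta_true, reps = 0.8, 2000

for n in (25, 100, 400, 1600):
    # Draw `reps` samples of size n from Ailamujia(theta_true).
    x = rng.gamma(shape=2.0, scale=1.0 / (2.0 * theta_true), size=(reps, n))
    theta_hat = n / x.sum(axis=1)            # closed-form MLE per sample
    bias = theta_hat.mean() - theta_true
    mse = np.mean((theta_hat - theta_true) ** 2)
    print(f"n={n:5d}  bias={bias:+.4f}  MSE={mse:.5f}")
```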
47. A NOVEL HYBRID DISTRIBUTED INNOVATION EGARCH MODEL FOR INVESTIGATING THE VOLATILITY OF THE STOCK MARKET.
- Author
-
M. T., MUBARAK, O. D., ADUBISI, and U. F., ABBAS
- Subjects
- *
DISTRIBUTION (Probability theory) , *GARCH model , *MARKET volatility - Abstract
When calculating risk and making decisions, investors and financial institutions rely heavily on models of asset-return volatility. For the exponential generalized autoregressive conditional heteroscedasticity (EGARCH) model, we construct a new innovation distribution called the type-II-Topp-Leone-exponentiated-Gumbel (TIITLEGU) distribution. The key mathematical characteristics of the distribution are determined, and Monte Carlo experiments are used to study estimation of its parameters by the maximum likelihood estimation (MLE) procedure. The performance of the EGARCH(1,1) model with TIITLEGU-distributed innovations relative to other innovation densities in volatility modeling is examined through applications to two Nigerian stock return series. The diagnostic tests indicate that, with the exception of the EGARCH(1,1) model with Johnson (SU) reparametrized (JSU) innovations, the fitted models are adequately specified. The parameters of the EGARCH(1,1) model with the various innovation densities are significant at various levels. Furthermore, in out-of-sample prediction, the fitted EGARCH(1,1)-TIITLEGU model outperforms the EGARCH(1,1) models with existing innovation densities. We therefore conclude that the EGARCH-TIITLEGU model is the most effective for analyzing Nigerian stock market volatility. [ABSTRACT FROM AUTHOR]
- Published
- 2024
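For readers who want to reproduce the EGARCH(1,1) baseline in this entry, the `arch` Python package (assumed installed via `pip install arch`) fits EGARCH models directly. The bespoke TIITLEGU innovation density is not available off the shelf, so a Student-t density stands in, and the return series is a made-up placeholder.

```python
# Fit an EGARCH(1,1) model with Student-t innovations using `arch`,
# then produce a short out-of-sample volatility forecast.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(6)
returns = 1.5 * rng.standard_t(df=5, size=1500)   # placeholder % returns

model = arch_model(returns, vol="EGARCH", p=1, o=1, q=1, dist="t")
fit = model.fit(disp="off")
print(fit.summary())

# Out-of-sample volatility forecast, mirroring the paper's comparison.
print(fit.forecast(horizon=5).variance.iloc[-1])
```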
48. INVERTED DAGUM DISTRIBUTION: PROPERTIES AND APPLICATION TO LIFETIME DATASET.
- Author
-
OSI, ABDULHAMEED A., SABO, SHAMSUDDEEN A., and MUSA, IBRAHIM Z.
- Subjects
- *
DISTRIBUTION (Probability theory) , *HAZARD function (Statistics) - Abstract
This article presents the introduction of a novel univariate probability distribution termed the inverted Dagum distribution. Extensive analysis of the statistical properties of this distribution, including the hazard function, survival function, Renyi's entropy, quantile function, and the distribution of the order statistics, was conducted. Parameter estimation of the model was performed utilizing the maximum likelihood method, with the consistency of the estimates validated through Monte Carlo simulation. Furthermore, the applicability of the proposed distribution was demonstrated through the analysis of two real datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
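A speculative sketch of the Monte Carlo consistency check this entry reports: sample via the quantile function, then recover the parameters by maximum likelihood. The CDF form F(y) = 1 - (1 + (y/b)^a)^{-p} used below is a guess at an "inverted Dagum" parameterization, so treat the whole block purely as a template.

```python
# Quantile-function sampling plus MLE for an assumed inverted-Dagum CDF
# F(y) = 1 - (1 + (y/b)**a)**(-p); parameterization is an assumption.
import numpy as np
from scipy.optimize import minimize

def negloglik(params, y):
    a, b, p = params
    if min(a, b, p) <= 0:
        return np.inf
    z = (y / b) ** a
    # log f(y) = log(p) + log(a) + (a-1)*log(y/b) - log(b) - (p+1)*log(1+z)
    logpdf = (np.log(p) + np.log(a) + (a - 1.0) * np.log(y / b)
              - np.log(b) - (p + 1.0) * np.log1p(z))
    return -np.sum(logpdf)

rng = np.random.default_rng(7)
a0, b0, p0 = 2.0, 1.5, 0.8
u = rng.uniform(size=500)
y = b0 * ((1.0 - u) ** (-1.0 / p0) - 1.0) ** (1.0 / a0)  # quantile sampling

res = minimize(negloglik, x0=[1.0, 1.0, 1.0], args=(y,), method="Nelder-Mead")
print("MLE (a, b, p):", res.x)
```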
49. ANALYSIS OF TWO NON-IDENTICAL UNIT SYSTEM HAVING SAFE AND UNSAFE FAILURES WITH REBOOTING AND PARAMETRIC ESTIMATION IN CLASSICAL AND BAYESIAN PARADIGMS.
- Author
-
SHARMA, POONAM and KUMAR, PAWAN
- Subjects
- *
PARAMETER estimation , *BAYESIAN analysis - Abstract
The present paper studies a model of a system of two non-identical units subject to safe and unsafe failures and rebooting, focusing on important reliability measures and on parameter estimation in the classical and Bayesian paradigms. Initially, one unit is operational while the other is kept in standby mode. Either unit may suffer a safe or an unsafe failure. A safe failure is immediately taken up for repair by a repairman who is available at all times, whereas an unsafe failure cannot be handled directly: rebooting is first performed to convert the unsafe failure into the safe-failure mode so that repair can proceed normally. A switching device is used to bring the repaired and standby units into operation. The lifetimes of both units and of the switching device are taken to be exponentially distributed random variables, whereas the repair-time distributions are assumed to be general. The regenerative point technique is employed to derive the associated measures of effectiveness, and some of the derived characteristics are also studied graphically. A simulation study exhibits the behaviour of the obtained characteristics in the classical and Bayesian setups. Valuable inferences about the MLE and Bayes estimates are drawn from tables and graphs for varying values of the failure and repair parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2024
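A speculative Monte Carlo sketch of the system in this entry: two units (one operating, one in standby), failures that are "unsafe" with some probability and must reboot before repair can start, a single repairman, and general (here lognormal) repair times. The rates, the perfect instantaneous switch, and the repair distribution are illustrative assumptions, not the paper's exact model, which is analyzed via the regenerative point technique rather than simulation.

```python
# Event-driven simulation estimating long-run availability of a
# two-unit standby system with safe/unsafe failures and rebooting.
import numpy as np

rng = np.random.default_rng(8)
lam = 1 / 50.0     # unit failure rate (exponential lifetimes)
q = 0.3            # probability that a failure is "unsafe"
nu = 1 / 2.0       # reboot completion rate for unsafe failures
T_END = 1e6

t, up = 0.0, 0.0
n_good, queue = 2, 0            # working units; failed units awaiting repair
t_fail = rng.exponential(1 / lam)
t_repair = np.inf               # completion time of the repair in progress
reboots = []                    # pending reboot completion times

while True:
    t_reboot = min(reboots) if reboots else np.inf
    t_next = min(t_fail, t_repair, t_reboot, T_END)
    if n_good >= 1:
        up += t_next - t        # system is up while one unit operates
    t = t_next
    if t >= T_END:
        break
    if t == t_fail:             # the operating unit fails
        n_good -= 1
        if rng.uniform() < q:   # unsafe: reboot before repair can start
            reboots.append(t + rng.exponential(1 / nu))
        else:
            queue += 1
        t_fail = t + rng.exponential(1 / lam) if n_good >= 1 else np.inf
    elif t == t_reboot:         # reboot done; unit becomes repairable
        reboots.remove(t_reboot)
        queue += 1
    else:                       # repair completed
        n_good += 1
        t_repair = np.inf
        if n_good == 1:         # repaired unit resumes operation
            t_fail = t + rng.exponential(1 / lam)
    if queue and not np.isfinite(t_repair):
        queue -= 1
        t_repair = t + rng.lognormal(1.0, 0.5)  # general repair times

print("estimated long-run availability:", up / T_END)
```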
50. A PROBABILITY MODEL FOR SURVIVAL ANALYSIS OF CANCER PATIENTS.
- Author
-
Ray, Mousumi and Shanker, Rama
- Subjects
- *
CANCER patients , *PROBABILITY theory , *SURVIVAL analysis (Biometry) - Abstract
Statisticians have observed that finding a suitable model for the survival analysis of cancer patients is genuinely challenging, mainly because such datasets are highly positively skewed. During recent decades, statisticians have proposed one-, two-, three-, four-, and five-parameter probability models, but from either a theoretical or an applied point of view, the goodness of fit provided by these distributions is not very satisfactory. In this paper a compound probability model called the gamma-Sujatha distribution, a compound of the gamma and Sujatha distributions, is proposed for modeling the survival times of cancer patients. Many important properties of the suggested distribution, including its shape, (negative) moments, hazard function, reversed hazard function, and quantile function, are discussed. The method of maximum likelihood is used to estimate its parameters, and a simulation study examines the consistency of the maximum likelihood estimators. Two real datasets, one on acute bone cancer and the other on head and neck cancer, are considered to examine the applicability, suitability, and flexibility of the proposed distribution, which shows a quite satisfactory fit over the other distributions considered. [ABSTRACT FROM AUTHOR]
- Published
- 2024
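A speculative sketch of sampling from a gamma-Sujatha compound: the gamma rate is drawn from a Sujatha(θ) distribution, read here through its known mixture representation as Gamma(1, θ), Gamma(2, θ), and Gamma(3, θ) components with weights proportional to θ², θ, and 2. The exact compounding in the paper may differ, so treat this as a template only.

```python
# Hierarchical sampling from an assumed gamma-Sujatha compound:
# rate ~ Sujatha(theta), then X ~ Gamma(alpha, rate).
import numpy as np

rng = np.random.default_rng(9)

def rsujatha(theta, size):
    # Sujatha(theta) as a mixture of Gamma(k, rate=theta), k = 1, 2, 3,
    # with weights proportional to theta**2, theta, 2.
    w = np.array([theta**2, theta, 2.0])
    shape = rng.choice([1, 2, 3], size=size, p=w / w.sum())
    return rng.gamma(shape=shape, scale=1.0 / theta)

alpha, theta = 2.0, 1.2
lam = rsujatha(theta, 10_000)                 # Sujatha-distributed rates
x = rng.gamma(shape=alpha, scale=1.0 / lam)   # compound draws

# The compound inherits the heavy right tail typical of survival times.
print("mean:", x.mean(), " mean/median:", x.mean() / np.median(x))
```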