5,621 results on '"Point estimation"'
Search Results
2. Advancing oncological practice with innovative cancer data analytics: A new exponential-generated class of distributions.
- Author
-
Alharthi, Amirah Saeed
- Subjects
HEAD & neck cancer ,MEDICAL personnel ,CANCER-related mortality ,CANCER patients ,DATA analytics ,SURVIVAL analysis (Biometry) - Abstract
Traditional predictive modeling techniques have significantly influenced the analysis of survival times and predictive markers in oncology. However, these models often do not fully address the complexities of cancer data, leading to a noticeable gap in accurately modeling both cancer survival and mortality. The aim of this study is to present a model that offers the best fit for cancer mortality and survival data, thereby improving treatment strategies and patient prognoses. I propose the New Exponential-Weibull (NEWE) distribution within the New Exponential-Generated (NE-G) class of distributions to enhance oncological analytics with greater accuracy and comprehensiveness. An empirical analysis of Brazilian cancer mortality data is conducted; this data set covers children, head and neck, and cervical cancer. Concurrently, survival time-to-event data for bladder and advanced lung cancer patients are assessed to evaluate the model's effectiveness. Based on standard metrics, extensive simulation experiments show the Maximum Product of Spacing Estimator to be the best of seven point estimation techniques. The NEWE distribution shows superior modeling capabilities, surpassing traditional models with lower values of log-likelihood, Cramer-von Mises, and Anderson-Darling statistics and higher Kolmogorov-Smirnov (KS) p-values. The study also identifies the most fitting estimators for distinct types of cancer mortality and survival data: the Right Tail Anderson-Darling for child cancer deaths, the Maximum Product of Spacing for head and neck cancer deaths, the Least Squares for cervical cancer deaths, the Weighted Least Squares for bladder cancer survival, and the Anderson-Darling for advanced lung cancer deaths. This shows that the NEWE distribution can be used in a range of cancer settings. The development and implementation of the NEWE distribution marks a significant advancement in oncological modeling. 
By analyzing both cancer mortality and survival data with enhanced accuracy and flexibility, this novel approach surpasses traditional models, offering deeper insights into cancer progression and treatment outcomes. As a result, the NEWE distribution equips healthcare professionals with a powerful tool for improving clinical decisions, leading to better prognostic assessments and patient care in cancer treatment. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
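The NEWE formulas are not given in the abstract, but the estimation technique it singles out, the Maximum Product of Spacings, is easy to sketch for the plain Weibull baseline. The following is a minimal illustration (function names and starting values are mine, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize

def weibull_cdf(x, k, lam):
    return 1.0 - np.exp(-(x / lam) ** k)

def mps_fit_weibull(sample):
    # Maximize the product of spacings F(x_(i)) - F(x_(i-1)) over the
    # sorted sample, with F(x_(0)) = 0 and F(x_(n+1)) = 1.
    x = np.sort(np.asarray(sample, dtype=float))
    def neg_log_spacings(params):
        k, lam = params
        if k <= 0 or lam <= 0:
            return np.inf
        u = np.concatenate(([0.0], weibull_cdf(x, k, lam), [1.0]))
        d = np.diff(u)
        return -np.sum(np.log(np.maximum(d, 1e-300)))  # guard against ties
    res = minimize(neg_log_spacings, x0=[1.0, float(np.mean(x))],
                   method="Nelder-Mead")
    return res.x  # (shape, scale)

rng = np.random.default_rng(0)
sample = 1.5 * rng.weibull(2.0, size=500)   # true shape 2.0, scale 1.5
k_hat, lam_hat = mps_fit_weibull(sample)
```

The same objective works for any distribution with a tractable CDF, which is why spacing-based estimation pairs naturally with generated families like NE-G.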
3. An Integrated Approach to the Regional Estimation of Soil Moisture.
- Author
-
Sánchez-Fernández, Luis Pastor, Flores-Carrillo, Diego Alberto, and Sánchez-Pérez, Luis Alejandro
- Subjects
SOIL moisture measurement ,SOIL moisture ,FIX-point estimation ,WATER management ,SENSOR networks - Abstract
Automatic or smart irrigation systems benefit irrigation water management. However, measurement sensor networks in automatic irrigation systems are complex, and maintenance is essential. Regional soil moisture estimation avoids the multiple measurements necessary when deploying an irrigation system. In this sense, a fuzzy estimation approach based on decision-making (FEADM) has been used to obtain soil moisture point estimates. However, FEADM requires intelligent weather adjustment based on spatial features (IWeCASF) to perform regional soil moisture estimation. The IWeCASF-FEADM integrated approach for regional soil moisture estimation is developed in this work. IWeCASF provides the inputs for FEADM. FEADM is performed R times; R is the number of checkpoints at which a point estimate is obtained. In this way, regional estimation is achieved when the set of R soil moisture point estimates is completed. Additionally, IWeCASF-FEADM considers the irrigation water records, which are not included in either method individually. This method can detect when the soil moisture is deficient in a region, allowing actions to prevent water stress. This regional estimation reduces an irrigation system's operational and maintenance complexity. This integrated approach has been tested over several years by comparing the results of regional soil moisture estimation with measurements obtained at many points in the study region. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Advancing oncological practice with innovative cancer data analytics: A new exponential-generated class of distributions
- Author
-
Amirah Saeed Alharthi
- Subjects
Generated class ,Weibull distribution ,Mathematical modeling ,Oncology ,Point estimation ,Time-to-event data ,Engineering (General). Civil engineering (General) ,TA1-2040 - Abstract
Traditional predictive modeling techniques have significantly influenced the analysis of survival times and predictive markers in oncology. However, these models often do not fully address the complexities of cancer data, leading to a noticeable gap in accurately modeling both cancer survival and mortality. The aim of this study is to present a model that offers the best fit for cancer mortality and survival data, thereby improving treatment strategies and patient prognoses. I propose the New Exponential-Weibull (NEWE) distribution within the New Exponential-Generated (NE-G) class of distributions to enhance oncological analytics with greater accuracy and comprehensiveness. An empirical analysis of Brazilian cancer mortality data is conducted; this data set covers children, head and neck, and cervical cancer. Concurrently, survival time-to-event data for bladder and advanced lung cancer patients are assessed to evaluate the model’s effectiveness. Based on standard metrics, extensive simulation experiments show the Maximum Product of Spacing Estimator to be the best of seven point estimation techniques. The NEWE distribution shows superior modeling capabilities, surpassing traditional models with lower values of log-likelihood, Cramer-von Mises, and Anderson-Darling statistics and higher Kolmogorov-Smirnov (KS) p-values. The study also identifies the most fitting estimators for distinct types of cancer mortality and survival data: the Right Tail Anderson-Darling for child cancer deaths, the Maximum Product of Spacing for head and neck cancer deaths, the Least Squares for cervical cancer deaths, the Weighted Least Squares for bladder cancer survival, and the Anderson-Darling for advanced lung cancer deaths. This shows that the NEWE distribution can be used in a range of cancer settings. The development and implementation of the NEWE distribution marks a significant advancement in oncological modeling. 
By analyzing both cancer mortality and survival data with enhanced accuracy and flexibility, this novel approach surpasses traditional models, offering deeper insights into cancer progression and treatment outcomes. As a result, the NEWE distribution equips healthcare professionals with a powerful tool for improving clinical decisions, leading to better prognostic assessments and patient care in cancer treatment.
- Published
- 2024
- Full Text
- View/download PDF
5. Next-generation statistical methodology: Advances in health science research
- Author
-
Muqrin A. Almuqrin
- Subjects
Weibull distribution ,Generated classes ,Predictive analysis ,Health science research ,Point estimation ,Monte Carlo simulation ,Engineering (General). Civil engineering (General) ,TA1-2040 - Abstract
Accurately modeling health science data is crucial for advancing medical research and improving patient outcomes. Traditional statistical analysis methods face significant challenges due to the complexity and diversity of health sciences data. This article introduces a groundbreaking statistical framework designed to overcome these challenges by developing a next-generation family of distributions, with a special focus on the versatility of the Weibull distribution. The data used in this study have been thoroughly authenticated to ensure reliability and validity. Comprehensive Monte Carlo simulations revealed that the Maximum Product of Spacing Estimator is the most effective of seven point estimation methods, according to standard metrics. Additionally, the study identifies optimal methods for analyzing various types of lifetime data: the Maximum Product of Spacing Estimator for pharmaceutical efficacy (ED50), the Least Squares Estimator for psychiatric treatment durations, the Cramer-von Mises Estimator for data on 43 leukemia patients and for survival periods of 20 leukemia patients, and the Right Tail Anderson-Darling Estimator for remission times of 128 bladder cancer patients. The adaptability and flexibility of the next-generation Weibull distribution set it apart as the best match among its contemporaries.
- Published
- 2024
- Full Text
- View/download PDF
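The Monte Carlo comparison of point estimators by standard metrics that this abstract describes follows a generic recipe, sketched below. The toy estimand (an exponential mean, compared via sample mean versus rescaled median) is my stand-in, not the paper's model:

```python
import numpy as np

def mc_compare(estimators, sampler, theta_true, n=50, reps=2000, seed=1):
    # Bias and root mean square error of each estimator over `reps`
    # simulated samples of size n.
    rng = np.random.default_rng(seed)
    report = {}
    for name, est in estimators.items():
        vals = np.array([est(sampler(rng, n)) for _ in range(reps)])
        report[name] = {"bias": float(vals.mean() - theta_true),
                        "rmse": float(np.sqrt(np.mean((vals - theta_true) ** 2)))}
    return report

# Toy study: two point estimators of an exponential distribution's mean.
# (For an exponential, median = mean * ln 2, hence the rescaled median.)
report = mc_compare(
    {"sample_mean": np.mean,
     "scaled_median": lambda x: np.median(x) / np.log(2.0)},
    sampler=lambda rng, n: rng.exponential(scale=2.0, size=n),
    theta_true=2.0)
```

Swapping in the seven estimation methods the paper compares is a matter of adding entries to the dictionary.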
6. Point estimation of the 100p percent lethal dose using a novel penalised likelihood approach.
- Author
-
Ma, Yilei, Su, Youpeng, Wang, Peng, and Yin, Ping
- Subjects
STANDARD deviations ,MAXIMUM likelihood statistics ,FIX-point estimation ,REGRESSION analysis ,LOGISTIC regression analysis - Abstract
Estimation of the 100p percent lethal dose (LD100p) is of great interest to pharmacologists for assessing the toxicity of certain compounds. However, most existing literature focuses on the interval estimation of LD100p, and little attention has been paid to its point estimation. Currently, the most commonly used method for estimating LD100p is the maximum likelihood estimator (MLE), which can be represented as a ratio estimator whose denominator is the slope estimated from the logistic regression model. However, the MLE can be seriously biased when the sample size is small, which is common in such studies, or when the dose–response curve is relatively flat (i.e. the slope approaches zero). In this study, we address these issues by developing a novel penalised maximum likelihood estimator (PMLE) that prevents the denominator of the ratio from being close to zero. Like the MLE, the PMLE is computationally simple and can therefore be conveniently used in practice. Moreover, with a suitable penalty parameter, we show that the PMLE can (a) reduce the bias to the second order with respect to the sample size and (b) avoid extreme estimates. Through simulation studies and real data applications, we show that the PMLE generally outperforms the existing methods in terms of bias and root mean square error. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
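The abstract describes the MLE of LD100p as a ratio whose denominator is the fitted logistic slope, penalized so it stays away from zero. A sketch with a generic log-barrier penalty (the paper's actual penalty is not reproduced here, and all names and constants are mine):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, logit

def fit_logistic(dose, deaths, trials, pen=0.0):
    # Binomial-logistic dose-response fit; the `- pen*log(b^2)` term blows
    # up as the slope b -> 0, keeping the ratio's denominator away from
    # zero (a stand-in for the paper's penalty, which differs).
    def nll(params):
        a, b = params
        p = expit(a + b * dose)
        eps = 1e-12
        ll = deaths * np.log(p + eps) + (trials - deaths) * np.log(1 - p + eps)
        return -ll.sum() - pen * np.log(b * b + 1e-300)
    return minimize(nll, x0=[0.0, 1.0], method="Nelder-Mead").x

def ld(p, a, b):
    # LD_{100p} as a ratio estimator: dose where the death probability is p.
    return (logit(p) - a) / b

rng = np.random.default_rng(2)
dose = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
deaths = rng.binomial(40, expit(-2.0 + dose))   # true LD50 = 2.0
a_hat, b_hat = fit_logistic(dose, deaths, trials=40, pen=0.5)
ld50_hat = ld(0.5, a_hat, b_hat)
```

With `pen=0.0` this reduces to the plain MLE of the abstract's first half.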
7. Representation of negative numbers: point estimation tasks using multi-reference sonification mappings.
- Author
-
Putra, Zico Pratama and Setiawan, Deni
- Subjects
FIX-point estimation ,CARTESIAN coordinates ,HUMAN-computer interaction ,ENERGY consumption ,AUDITORY perception - Abstract
In this study, we examine different approaches to the presentation of Y coordinates in mobile auditory graphs, including the representation of negative numbers. These studies involved both normally sighted and visually impaired users, as there are applications where normally sighted users might employ auditory graphs, such as the unseen monitoring of stocks or fuel consumption in a car. Multi-reference sonification schemes are investigated as a means of improving the performance of mobile non-visual point estimation tasks. The results demonstrated that both populations are able to carry out point estimation tasks with a good level of performance when presented with auditory graphs using multiple reference tones. Additionally, visually impaired participants performed better on graphs represented in this format than normally sighted participants. This work also implements the component representation approach for negative numbers, which uses the same positive mapping reference for the digit and adds a sign cue before it, leading to better accuracy for the polarity sign. This work contributes to the design of mobile auditory devices in human-computer interaction and proposes a methodological framework for improving auditory graph performance in graph reproduction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
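As a concrete illustration of the component representation for negative values: magnitude can reuse the positive pitch mapping while a separate cue encodes the sign. The frequency range and cue name below are illustrative choices of mine, not the study's actual mapping:

```python
def component_tone(value, f_min=220.0, f_max=880.0, v_max=100.0):
    # Sign is played as a separate cue before the tone; the magnitude is
    # mapped with the same positive reference, log-spaced in pitch.
    sign_cue = "negative_cue" if value < 0 else None
    frac = min(abs(value) / v_max, 1.0)
    freq = f_min * (f_max / f_min) ** frac
    return sign_cue, freq
```

Because +50 and -50 map to the identical frequency, any polarity error is confined to the sign cue, which is the point of the component representation.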
8. Generalized fiducial inference for the generalized logistic distribution: Censored and uncensored cases.
- Author
-
Li, Menghan, Yan, Liang, Li, Meng, and Wang, Qi
- Abstract
This article primarily considers the statistical inference of the scale parameter, shape parameter, and reliability of the generalized logistic distribution in both censored and uncensored cases. For progressively Type-II censored and complete samples, the frequentist method is utilized to construct asymptotic confidence intervals, and the Bayesian inference method is employed for constructing the point estimators as well as posterior credible intervals. Then the generalized fiducial method is applied to construct the fiducial point estimators and the fiducial confidence intervals. Furthermore, the nonparametric generalized fiducial method is introduced to estimate the survival function. Simulation results demonstrate that the generalized fiducial method consistently outperforms other methods in terms of mean square error, average length, and empirical coverage. Finally, two real datasets are used to illustrate the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Point estimation, confidence intervals, and P‐values for optimal adaptive two‐stage designs with normal endpoints.
- Author
-
Meis, Jan, Pilz, Maximilian, Bokelmann, Björn, Herrmann, Carolin, Rauch, Geraldine, and Kieser, Meinhard
- Subjects
FIX-point estimation ,CONFIDENCE intervals ,SAMPLING (Process) - Abstract
Due to the dependency structure in the sampling process, adaptive trial designs create challenges in point and interval estimation and in the calculation of P‐values. Optimal adaptive designs, which are designs where the parameters governing the adaptivity are chosen to maximize some performance criterion, suffer from the same problem. Various analysis methods which are able to handle this dependency structure have already been developed. In this work, we aim to give a comprehensive summary of these methods and show how they can be applied to the class of designs with planned adaptivity, of which optimal adaptive designs are an important member. The defining feature of these kinds of designs is that the adaptive elements are completely prespecified. This allows for explicit descriptions of the calculations involved, which makes it possible to evaluate different methods in a fast and accurate manner. We will explain how to do so, and present an extensive comparison of the performance characteristics of various estimators between an optimal adaptive design and its group‐sequential counterpart. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Representation of negative numbers: point estimation tasks using multi-reference sonification mappings
- Author
-
Zico Pratama Putra and Deni Setiawan
- Subjects
Sonification ,Point estimation ,Auditory graphs ,Non-visual interaction ,Negative numbers ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
In this study, we examine different approaches to the presentation of Y coordinates in mobile auditory graphs, including the representation of negative numbers. These studies involved both normally sighted and visually impaired users, as there are applications where normally sighted users might employ auditory graphs, such as the unseen monitoring of stocks, or fuel consumption in a car. Multi-reference sonification schemes are investigated as a means of improving the performance of mobile non-visual point estimation tasks. The results demonstrated that both populations are able to carry out point estimation tasks with a good level of performance when presented with auditory graphs using multiple reference tones. Additionally, visually impaired participants performed better on graphs represented in this format than normally sighted participants. This work also implements the component representation approach for negative numbers to represent the mapping by using the same positive mapping reference for the digit and adding a sign before the digit which leads to a better accuracy of the polarity sign. This work contributes to the areas of the design process of mobile auditory devices in human-computer interaction and proposed a methodological framework related to improving auditory graph performance in graph reproduction.
- Published
- 2024
- Full Text
- View/download PDF
11. Crowd Counting and Individual Localization Using Pseudo Square Label
- Author
-
Jihye Ryu and Kwangho Song
- Subjects
Crowd counting ,crowd localization ,anchor-free object detection ,point estimation ,video surveillance ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Recent work in crowd counting focuses on counting over detected individuals rather than estimating the number of people in the image. However, existing crowd localization methods directly detect the head point or region of individuals, which can leave no grid cell responsible for outputs that fall outside the grid. Our proposed Pseudo Square Label Network (PSL-Net) presents a novel method for crowd counting and localization that takes advantage of anchor-free detection: PSL-Net predicts the probability that the center point falls into the responsible grid, while indirectly detecting an individual outside of the responsible grid through box regression and centerness estimation. This study proposes to supervise with a pseudo square label (PSL), which is generated around the point annotation with a fixed size. Furthermore, we design a partial many-to-one matching algorithm to assign precise labels by only matching within the PSL during the training phase, and associate the predicted points with their responsible grids through centerness during the inference phase. As a result, PSL-Net not only achieves state-of-the-art results on ShanghaiTech Part A and B, the most popular datasets in crowd counting, but also achieves state-of-the-art performance among point detection-based methods in crowd localization.
- Published
- 2024
- Full Text
- View/download PDF
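The pseudo square label construction itself is simple: each head-point annotation becomes a fixed-size square box. A minimal sketch (the fixed size is a free hyperparameter, and clipping to the image bounds is my assumption):

```python
import numpy as np

def pseudo_square_labels(points, size, img_w, img_h):
    # Fixed-size square boxes (x1, y1, x2, y2) centered on each point
    # annotation, clipped to the image bounds.
    pts = np.asarray(points, dtype=float)
    half = size / 2.0
    boxes = np.stack([pts[:, 0] - half, pts[:, 1] - half,
                      pts[:, 0] + half, pts[:, 1] + half], axis=1)
    boxes[:, 0::2] = boxes[:, 0::2].clip(0.0, img_w)
    boxes[:, 1::2] = boxes[:, 1::2].clip(0.0, img_h)
    return boxes

boxes = pseudo_square_labels([(10.0, 10.0), (2.0, 5.0)],
                             size=8, img_w=100, img_h=100)
```

These boxes then play the role of ground truth for the box-regression and matching steps the abstract describes.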
12. An Integrated Approach to the Regional Estimation of Soil Moisture
- Author
-
Luis Pastor Sánchez-Fernández, Diego Alberto Flores-Carrillo, and Luis Alejandro Sánchez-Pérez
- Subjects
soil moisture regional estimate ,point estimation ,weather condition adjustment ,region spatial features ,Science - Abstract
Automatic or smart irrigation systems benefit irrigation water management. However, measurement sensor networks in automatic irrigation systems are complex, and maintenance is essential. Regional soil moisture estimation avoids the multiple measurements necessary when deploying an irrigation system. In this sense, a fuzzy estimation approach based on decision-making (FEADM) has been used to obtain soil moisture point estimates. However, FEADM requires intelligent weather adjustment based on spatial features (IWeCASF) to perform regional soil moisture estimation. The IWeCASF-FEADM integrated approach for regional soil moisture estimation is developed in this work. IWeCASF provides the inputs for FEADM. FEADM is performed R times; R is the number of checkpoints at which a point estimate is obtained. In this way, regional estimation is achieved when the set of R soil moisture point estimates is completed. Additionally, IWeCASF-FEADM considers the irrigation water records, which are not included in either method individually. This method can detect when the soil moisture is deficient in a region, allowing actions to prevent water stress. This regional estimation reduces an irrigation system’s operational and maintenance complexity. This integrated approach has been tested over several years by comparing the results of regional soil moisture estimation with measurements obtained at many points in the study region.
- Published
- 2024
- Full Text
- View/download PDF
13. Asymptotically honest fiducial generalized inference: an application in autoregressive models.
- Author
-
Guo, Dan, Yan, Liang, and Li, Menghan
- Subjects
SUNSPOTS - Abstract
This paper first studies the coefficient estimation of the AR model with normal innovations by proposing an asymptotically honest generalized fiducial (AHGF) method. Furthermore, the AHGF method is extended to the skew-normal setting. Simulation results show that the AHGF method offers advantages over traditional methods. Specifically, the AHGF method often has a smaller mean square error for point estimation. For interval estimation, the AHGF method behaves closer to the nominal level than other methods while maintaining comparable or shorter lengths. Finally, a temperature dataset and a sunspot series are used to illustrate the proposed AHGF methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Estimation of parameters and quantiles of the Weibull distribution.
- Author
-
Jokiel-Rokita, Alicja and Pia̧tek, Sylwester
- Subjects
WEIBULL distribution ,MAXIMUM likelihood statistics ,QUANTILES ,QUANTILE regression ,GINI coefficient ,NONPARAMETRIC estimation ,MONTE Carlo method - Abstract
We propose three new estimators of the Weibull distribution parameters, which lead to three new plug-in estimators of quantiles. One of them is a modification of the maximum likelihood estimator and two are based on nonparametric estimators of the Gini coefficient. We also review estimators of the Weibull distribution parameters and quantiles. We compare the small-sample performance (in terms of bias and mean squared error) of the known and new estimators of the parameters and extreme quantiles. Based on simulations, we find, among other things, that the proposed modification of the maximum likelihood estimator of the shape parameter has a smaller bias and mean squared error than the maximum likelihood estimator, and is better than or as good as known estimators when the sample size is not very small. Moreover, one of the proposed estimators, based on the nonparametric estimator of the Gini coefficient, leads to good extreme quantile estimates (better than the maximum likelihood estimator) for small sample sizes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
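What makes Gini-based estimation work here is a closed form: for the Weibull with shape k, the Gini index is G = 1 - 2^(-1/k). A sketch of this idea, pairing the inverted Gini with a moment estimate of scale (the paper's exact estimators may differ):

```python
import numpy as np
from math import gamma, log2

def gini(x):
    # Nonparametric Gini index: mean absolute pairwise difference
    # divided by twice the mean.
    x = np.asarray(x, dtype=float)
    return float(np.abs(x[:, None] - x[None, :]).mean() / (2.0 * x.mean()))

def weibull_fit_gini(x):
    # Invert G = 1 - 2**(-1/k) for the shape, then recover the scale
    # from E[X] = lam * Gamma(1 + 1/k).
    g = gini(x)
    k = -1.0 / log2(1.0 - g)
    lam = float(np.mean(x)) / gamma(1.0 + 1.0 / k)
    return k, lam

rng = np.random.default_rng(3)
x = 3.0 * rng.weibull(2.0, size=2000)   # true shape 2.0, scale 3.0
k_hat, lam_hat = weibull_fit_gini(x)
```

Because the Gini index is a smooth functional of the sample, this estimator tends to be less sensitive to a few extreme observations than likelihood-based fits.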
15. Likelihood-Free Parameter Estimation with Neural Bayes Estimators.
- Author
-
Sainsbury-Dale, Matthew, Zammit-Mangion, Andrew, and Huser, Raphaël
- Subjects
BAYES' estimation ,PARAMETER estimation ,FIX-point estimation ,MAXIMUM likelihood statistics ,CONFIDENCE intervals ,STATISTICIANS - Abstract
Neural Bayes estimators are neural networks that approximate Bayes estimators. They are fast, likelihood-free, and amenable to rapid bootstrap-based uncertainty quantification. In this article, we aim to increase the awareness of statisticians to this relatively new inferential tool, and to facilitate its adoption by providing user-friendly open-source software. We also give attention to the ubiquitous problem of estimating parameters from replicated data, which we address using permutation-invariant neural networks. Through extensive simulation studies we demonstrate that neural Bayes estimators can be used to quickly estimate parameters in weakly identified and highly parameterized models with relative ease. We illustrate their applicability through an analysis of extreme sea-surface temperature in the Red Sea where, after training, we obtain parameter estimates and bootstrap-based confidence intervals from hundreds of spatial fields in a fraction of a second. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Statistical Inference
- Author
-
Emmert-Streib, Frank, Moutari, Salissou, and Dehmer, Matthias
- Published
- 2023
- Full Text
- View/download PDF
17. On the Asymptotic Behavior of the Leading Eigenvector of Tyler’s Shape Estimator Under Weak Identifiability
- Author
-
Paindaveine, Davy, Verdebout, Thomas, Yi, Mengxi, editor, and Nordhausen, Klaus, editor
- Published
- 2023
- Full Text
- View/download PDF
18. Properties of complex-valued power means of random variables and their applications.
- Author
-
Akaoka, Y., Okamura, K., and Otobe, Y.
- Subjects
RANDOM variables ,ARITHMETIC mean ,REAL numbers ,INTEGRAL inequalities ,FIX-point estimation ,LIMIT theorems - Abstract
We consider power means of independent and identically distributed (i.i.d.) non-integrable random variables. The power mean is an example of a homogeneous quasi-arithmetic mean. Under certain conditions, several limit theorems hold for the power mean, similar to the case of the arithmetic mean of i.i.d. integrable random variables. A distinctive feature of our work is that the generators of the power means are allowed to be complex-valued, which enables us to consider the power mean of random variables supported on the whole set of real numbers. We establish integrability of the power mean of i.i.d. non-integrable random variables and a limit theorem for the variances of the power mean. We also consider the behavior of the power mean as the parameter of the power varies. The complex-valued power means are unbiased, strongly consistent, robust estimators of the joint location and scale parameters of the Cauchy distribution. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
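A concrete instance of such a complex-valued mean for the Cauchy distribution: for Im z > 0, E[1/(X - z)] = 1/(mu - z - i*sigma), so inverting the sample mean of 1/(x - z) estimates location and scale in one shot. This is one member of the quasi-arithmetic family the paper studies, not its full construction:

```python
import numpy as np

def cauchy_complex_mean(x, z=1j):
    # For X ~ Cauchy(mu, sigma) and Im z > 0,
    # E[1/(X - z)] = 1/(mu - z - i*sigma), so 1/mean + z estimates
    # mu - i*sigma: real part is the location, minus the imaginary
    # part is the scale.
    m = np.mean(1.0 / (np.asarray(x, dtype=float) - z))
    w = 1.0 / m + z
    return float(w.real), float(-w.imag)

rng = np.random.default_rng(7)
x = 2.0 + 0.5 * rng.standard_cauchy(size=100_000)  # true mu = 2.0, sigma = 0.5
mu_hat, sigma_hat = cauchy_complex_mean(x)
```

Note the transform 1/(x - i) is bounded, so the estimator has finite variance even though the Cauchy sample itself has no mean, which is exactly the robustness the abstract claims.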
19. Modeling Climate data using the Quartic Transmuted Weibull Distribution and Different Estimation Methods.
- Author
-
Moloy, Deluar J., Ali, M. A., and Alam, Farouq Mohammad A.
- Subjects
WEIBULL distribution ,MONTE Carlo method ,ATMOSPHERIC models ,DISTRIBUTION (Probability theory) ,LEAST squares ,HAZARD function (Statistics) ,GOODNESS-of-fit tests - Abstract
Researchers from various fields of science encounter phenomena of interest and seek to model their occurrence scientifically. An important approach to modeling is to use probability distributions. Probability distributions are probabilistic models that have many applications in different research areas, including, but not limited to, environmental and financial studies. In this paper, we study a quartic transmuted Weibull distribution from a general quartic transmutation family of distributions as a generalization of, and an alternative to, the well-known Weibull distribution. We also investigate the practical application of this generalization by modeling climate-related data sets and checking the goodness-of-fit of the proposed model. The statistical properties of the proposed model, which include non-central moments, generating functions, the survival function, and the hazard function, are derived. Different estimation methods are used to estimate the parameters of the proposed quartic transmuted distribution: the maximum likelihood estimation method, the maximum product of spacings method, two least-squares-based methods, and three goodness-of-fit-based estimation methods. A numerical illustration and an extensive comparative Monte Carlo simulation study are conducted to investigate the performance of the estimators of the considered inferential methods. Regarding estimation methods, simulation outcomes indicated that the maximum likelihood estimation (MLE), Anderson-Darling estimation (ADE), and right Anderson-Darling estimation (RADE) methods in general outperformed the other considered methods in terms of estimation efficiency for large sample sizes, while all considered estimation methods performed almost the same in terms of goodness-of-fit regardless of the values of the shape and transmuted parameters. 
Two real-life data sets are used to demonstrate the suggested estimation methods and the applicability and flexibility of the proposed distribution compared to the Weibull, transmuted Weibull, and cubic transmuted Weibull distributions. The weighted least squares estimation (WLSE) and least squares estimation (LSE) methods provided the best-fitting estimates of the proposed distribution for the Wheaton River and rainfall data, respectively. The proposed quartic transmuted Weibull distribution provided a significantly improved fit for the two datasets compared with competing distributions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. An inferential analysis for the Weibull-G family of distributions under progressively censored data.
- Author
-
Shukla, Ashish Kumar, Soni, Sakshi, and Kumar, Kapil
- Abstract
In this article, the classical and Bayesian estimators of the power parameter and two reliability characteristics, R(t) = P(X > t) and the stress-strength reliability P = P(X > Y), from the Weibull-G family of distributions are obtained using progressively Type-II censored data. The exact confidence intervals for the unknown parameter and both reliability measures are also constructed under the same censoring, and statistical testing procedures are developed for the parameter and P. Afterward, we obtain the Bayes prediction intervals for future observations in a two-sample situation. We examine the behavior of these estimators under different censoring schemes using the Monte Carlo simulation technique. These estimators and prediction intervals are compared thoroughly, and comments are made based on their numerical values. Finally, we analyze two examples, each having two real-life data sets, for illustration purposes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
21. Estimating the suspected larger of two normal means
- Author
-
Drew, Courtney and Marchand, Éric
- Published
- 2024
- Full Text
- View/download PDF
22. Two-stage and modified two-stage estimation in threshold first-order autoregressive process
- Author
-
Soudabe Sajjadipanah, Sayyed Mahmoud Mirjalili, and AhmadReza Zanboori
- Subjects
two-stage procedure ,modified two-stage procedure ,threshold autoregressive process ,point estimation ,monte carlo simulation ,Mathematics ,QA1-939 - Abstract
In this paper, we discuss the two-stage and the modified two-stage procedures for the estimation of the threshold autoregressive parameter in a first-order threshold autoregressive model (TAR(1)). This is motivated by the problem of finding a final sample size when the sample size is unknown in advance. For this purpose, a two-stage stopping variable and a class of modified two-stage stopping variables are proposed. Afterward, we prove significant properties of the procedures, including asymptotic efficiency and asymptotic risk efficiency for point estimation based on least-squares estimators. To illustrate this theory, comprehensive Monte Carlo simulation studies are conducted to observe the significant properties of the procedures. Furthermore, the performance of procedures based on Yule-Walker estimators is investigated, and the results are compared in practice, confirming our theoretical results. Finally, real time-series data are studied to demonstrate the application of the procedures.
- Published
- 2023
- Full Text
- View/download PDF
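The generic two-stage template behind this abstract (pilot stage, variance estimate, then a stopping variable giving the final sample size) can be sketched for least-squares estimation in a TAR(1) model. The simple fixed-width rule and its constants below are illustrative, not the paper's:

```python
import numpy as np

def simulate_tar1(n, theta1=0.5, theta2=-0.3, rng=None):
    # TAR(1): X_t = theta1*X_{t-1} + e_t if X_{t-1} <= 0,
    #         X_t = theta2*X_{t-1} + e_t otherwise.
    rng = rng or np.random.default_rng()
    x = np.zeros(n + 1)
    for t in range(1, n + 1):
        coef = theta1 if x[t - 1] <= 0.0 else theta2
        x[t] = coef * x[t - 1] + rng.standard_normal()
    return x

def two_stage_n(x_pilot, d=0.1, z=1.96):
    # Stopping variable: pilot least-squares fit, then the sample size a
    # fixed-width (+/- d) interval for the coefficient would need.
    # (A single pooled coefficient for brevity; the paper's rule is
    # regime-specific and uses different constants.)
    y, lag = x_pilot[1:], x_pilot[:-1]
    b = float((lag * y).sum() / (lag * lag).sum())
    s2 = float(np.mean((y - b * lag) ** 2))   # residual variance
    info = float(np.mean(lag * lag))          # per-observation information
    return max(len(y), int(np.ceil((z / d) ** 2 * s2 / info)))

pilot = simulate_tar1(100, rng=np.random.default_rng(11))
n_final = two_stage_n(pilot)
```

The modified two-stage variants in the paper adjust this stopping variable so the pilot size also grows with the precision demanded; the template above is the common skeleton.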
23. Estimation of the noise model of a local energy image using the Weibull distribution.
- Author
-
HERNÁNDEZ CIFUENTES, TATIANA, MARTÍNEZ AROCA, YORIADYS, JACANAMEJOY JAMIOY, CARLOS ANTONIO, and FORERO VARGAS, MANUEL GUILLERMO
- Subjects
IMAGE processing ,WEIBULL distribution ,FIX-point estimation ,POINT processes ,NOISE - Abstract
Phase congruency is a relatively little-known but powerful image processing technique that has been used for segmentation. However, a limitation of this technique is its sensitivity to noise. Therefore, to prevent noise from affecting segmentation results, a good estimation of its level is necessary, considering that in phase congruency this estimation is based on the local energy image. Consequently, to improve the results of this technique, it is essential to detect the noise threshold accurately. In this work, we introduce an efficient method to estimate the parameters of a Weibull distribution, which is used to model the noise of the energy image in phase congruency. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
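The Weibull fit described in the entry above can be illustrated independently of the phase-congruency pipeline. Below is a hedged sketch (not the authors' method): a linear least-squares fit on the linearized Weibull CDF with median-rank plotting positions; the synthetic data and parameter values are my own assumptions.

```python
import math
import random

def weibull_fit_lls(sample):
    # Linearized CDF: log(-log(1 - F(x))) = k*log(x) - k*log(lam),
    # so a straight-line fit gives shape k (slope) and scale lam (intercept).
    xs = sorted(sample)
    n = len(xs)
    # Median-rank plotting positions approximate F at the order statistics.
    pts = [(math.log(x), math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4))))
           for i, x in enumerate(xs, start=1)]
    mx = sum(px for px, _ in pts) / n
    my = sum(py for _, py in pts) / n
    k = (sum((px - mx) * (py - my) for px, py in pts)
         / sum((px - mx) ** 2 for px, _ in pts))
    lam = math.exp(mx - my / k)
    return k, lam

# Usage on synthetic data: inverse-CDF sampling x = lam * (-log(1-u))^(1/k).
rng = random.Random(4)
data = [1.5 * (-math.log(1.0 - rng.random())) ** (1.0 / 2.0) for _ in range(1000)]
k_hat, lam_hat = weibull_fit_lls(data)
```

The probability-plot fit is less efficient than maximum likelihood but needs no iteration, which is why it is a common first pass for Weibull noise models.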
24. Generalized Fiducial Inference for the Stress–Strength Reliability of Generalized Logistic Distribution.
- Author
-
Li, Menghan, Yan, Liang, Qiao, Yaru, Cai, Xia, and Said, Khamis K.
- Subjects
- *
INFERENTIAL statistics, *FIXED point estimation, *ACCELERATED life testing - Abstract
Generalized logistic distribution, as the generalized form of the symmetric logistic distribution, plays an important role in reliability analysis. This article focuses on the statistical inference for the stress–strength parameter R = P (Y < X) of the generalized logistic distribution with the same and different scale parameters. Firstly, we use the frequentist method to construct asymptotic confidence intervals, and adopt the generalized inference method for constructing the generalized point estimators as well as the generalized confidence intervals. Then the generalized fiducial method is applied to construct the fiducial point estimators and the fiducial confidence intervals. Simulation results demonstrate that the generalized fiducial method outperforms other methods in terms of the mean square error, average length, and empirical coverage. Finally, three real datasets are used to illustrate the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
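The stress–strength parameter R = P(Y < X) in the entry above has a convenient closed form when X and Y are Type-I generalized logistic with shapes α and β and common location/scale: R = α/(α + β). A small Monte Carlo check of that identity (my own sketch, not the authors' fiducial procedure):

```python
import math
import random

def rgenlogistic(alpha, n, rng):
    # Type-I generalized logistic, CDF F(x) = (1 + e^{-x})^{-alpha};
    # inverse-CDF sampling: x = -log(u^{-1/alpha} - 1).
    out = []
    for _ in range(n):
        u = min(max(rng.random(), 1e-12), 1.0 - 1e-12)  # guard the endpoints
        out.append(-math.log(u ** (-1.0 / alpha) - 1.0))
    return out

def estimate_R(alpha, beta, n=200_000, seed=1):
    # Monte Carlo point estimate of R = P(Y < X); for equal location and
    # scale the exact value is alpha / (alpha + beta).
    rng = random.Random(seed)
    xs = rgenlogistic(alpha, n, rng)
    ys = rgenlogistic(beta, n, rng)
    return sum(y < x for x, y in zip(xs, ys)) / n
```

The identity follows because F_X = G^α and F_Y = G^β for the same base CDF G, so P(Y < X) = ∫₀¹ α u^{α+β-1} du = α/(α+β).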
25. TWO-STAGE AND MODIFIED TWO-STAGE ESTIMATION IN THRESHOLD FIRST-ORDER AUTOREGRESSIVE PROCESS.
- Author
-
SAJJADIPANAH, S., MIRJALILI, S. M., and ZANBOORI, A.
- Subjects
ESTIMATION theory, AUTOREGRESSIVE models, SAMPLE size (Statistics), MATHEMATICAL variables, LEAST squares - Abstract
In this paper, we discuss the two-stage and the modified two-stage procedures for the estimation of the threshold autoregressive parameter in a first-order threshold autoregressive model (TAR(1)). This is motivated by the problem of finding a final sample size when the sample size is unknown in advance. For this purpose, a two-stage stopping variable and a class of modified two-stage stopping variables are proposed. Afterward, we prove significant properties of the procedures, including asymptotic efficiency and asymptotic risk efficiency for point estimation based on least-squares estimators. To illustrate the theory, comprehensive Monte Carlo simulation studies are conducted to observe these properties. Furthermore, the performance of procedures based on Yule-Walker estimators is investigated, and the results are compared in practice, confirming our theoretical results. Finally, real time-series data are studied to demonstrate the application of the procedures. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
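The two-stage idea in the entry above (a pilot stage that determines the final sample size) is easiest to see in Stein's classical form. The sketch below is a generic illustration under assumed settings, not the TAR(1)-specific stopping variable of the paper:

```python
import math
import random
import statistics

def two_stage_sample_size(pilot, d=0.2, z=1.96):
    # Stein-type rule: after a pilot of size m with sample s.d. s, take
    # N = max(m, ceil((z * s / d)^2)) so that a +/- d interval for the mean
    # has roughly the nominal level even though the variance was unknown
    # before sampling began.
    m = len(pilot)
    s = statistics.stdev(pilot)
    return max(m, math.ceil((z * s / d) ** 2))

# Usage: pilot of 30 observations; near (1.96/0.2)^2 ≈ 96 when s ≈ 1.
rng = random.Random(6)
pilot = [rng.gauss(0.0, 1.0) for _ in range(30)]
N = two_stage_sample_size(pilot)
```

The modified two-stage procedures in the paper refine exactly this stopping variable for the dependent-data TAR(1) setting.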
26. Point estimation for adaptive trial designs II: Practical considerations and guidance.
- Author
-
Robertson, David S., Choodari‐Oskooei, Babak, Dimairo, Munya, Flight, Laura, Pallmann, Philip, and Jaki, Thomas
- Subjects
- *
FIXED point estimation, *ESTIMATION bias, *TREATMENT effectiveness, *STATISTICS, *CLINICAL trials - Abstract
In adaptive clinical trials, the conventional end‐of‐trial point estimate of a treatment effect is prone to bias, that is, a systematic tendency to deviate from its true value. As stated in recent FDA guidance on adaptive designs, it is desirable to report estimates of treatment effects that reduce or remove this bias. However, it may be unclear which of the available estimators are preferable, and their use remains rare in practice. This article is the second in a two‐part series that studies the issue of bias in point estimation for adaptive trials. Part I provided a methodological review of approaches to remove or reduce the potential bias in point estimation for adaptive designs. In part II, we discuss how bias can affect standard estimators and assess the negative impact this can have. We review current practice for reporting point estimates and illustrate the computation of different estimators using a real adaptive trial example (including code), which we use as a basis for a simulation study. We show that while on average the values of these estimators can be similar, for a particular trial realization they can give noticeably different values for the estimated treatment effect. Finally, we propose guidelines for researchers around the choice of estimators and the reporting of estimates following an adaptive design. The issue of bias should be considered throughout the whole lifecycle of an adaptive design, with the estimation strategy prespecified in the statistical analysis plan. When available, unbiased or bias‐reduced estimates are to be preferred. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
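The bias the authors discuss is easy to reproduce in miniature. The following sketch is my own toy design, not the trial example or estimators from the paper: a two-arm, two-stage drop-the-loser design in which both true effects are zero, yet the conventional pooled estimate of the selected arm is biased upward.

```python
import random
import statistics

def naive_selected_estimate(rng, n_stage=50, true_effects=(0.0, 0.0), sd=1.0):
    # Stage 1: run both experimental arms.
    stage1 = [[rng.gauss(mu, sd) for _ in range(n_stage)] for mu in true_effects]
    # Interim adaptation: keep the arm with the larger stage-1 mean.
    k = 0 if statistics.fmean(stage1[0]) >= statistics.fmean(stage1[1]) else 1
    # Stage 2: recruit further patients to the selected arm only.
    stage2 = [rng.gauss(true_effects[k], sd) for _ in range(n_stage)]
    # Conventional end-of-trial estimate: pool both stages of the selected arm.
    return statistics.fmean(stage1[k] + stage2)

rng = random.Random(2024)
estimates = [naive_selected_estimate(rng) for _ in range(4000)]
selection_bias = statistics.fmean(estimates)  # positive despite zero true effects
```

Averaged over many trial realizations, the naive estimate overstates the selected arm's effect because selection favors arms whose stage-1 data happened to look good.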
27. Approximate Bayesian computation using asymptotically normal point estimates.
- Author
-
Karabatsos, George
- Subjects
- *
ASYMPTOTIC distribution, *GAUSSIAN distribution, *ASYMPTOTIC normality, *BAYESIAN analysis, *STATISTICAL bootstrapping, *STOCHASTIC models, *RESAMPLING (Statistics), *ESTIMATES - Abstract
Approximate Bayesian computation (ABC) provides inference of the posterior distribution, even for models with intractable likelihoods, by replacing the exact (intractable) model likelihood with a tractable approximate likelihood. Meanwhile, historically, the development of point-estimation methods usually precedes the development of posterior estimation methods. We propose and study new ABC methods based on asymptotically normal and consistent point-estimators of the model parameters. Specifically, for the classical ABC method, we propose and study two alternative bootstrap methods for estimating the tolerance tuning parameter, based on resampling from the asymptotic normal distribution of the given point-estimator. This tolerance estimator can be quickly computed even for any model for which it is computationally costly to sample directly from its exact likelihood, provided that its summary statistic is specified as a consistent point-estimator of the model parameters with an estimated asymptotic normal distribution that can typically be easily sampled from. Furthermore, this paper introduces and studies a new ABC method based on approximating the exact intractable likelihood by the asymptotic normal density of the point-estimator, motivated by the Bernstein-von Mises theorem. Unlike the classical ABC method, this new approach does not require tuning parameters, aside from the summary statistic (the parameter point estimate). Each of the new ABC methods is illustrated and compared through a simulation study of tractable and intractable-likelihood models, and through the Bayesian intractable-likelihood analysis of a real 23,000-node network dataset involving stochastic search model selection. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
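In the spirit of the entry above, here is a bare-bones rejection-ABC sketch in which the summary statistic is a consistent point estimator (the sample mean) and the tolerance is set from its estimated asymptotic standard error. This is my own simplification under an assumed N(theta, 1) model, not the authors' bootstrap tolerance calibration:

```python
import random
import statistics

def abc_posterior(data, prior_draw, n_sims=20_000, seed=0):
    rng = random.Random(seed)
    n = len(data)
    s_obs = statistics.fmean(data)            # summary = point estimator
    tol = statistics.stdev(data) / n ** 0.5   # its asymptotic standard error
    accepted = []
    for _ in range(n_sims):
        theta = prior_draw(rng)               # draw a candidate from the prior
        sim_mean = statistics.fmean(rng.gauss(theta, 1.0) for _ in range(n))
        if abs(sim_mean - s_obs) < tol:       # keep theta if the simulated
            accepted.append(theta)            # summary lands within tolerance
    return accepted

# Usage: N(theta, 1) data with theta = 2, and a vague normal prior.
gen = random.Random(1)
data = [gen.gauss(2.0, 1.0) for _ in range(50)]
post = abc_posterior(data, lambda rng: rng.gauss(0.0, 5.0))
```

The accepted draws approximate the posterior; tying the tolerance to the estimator's standard error keeps the acceptance region on the statistic's own scale.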
28. An Automated TW3-RUS Bone Age Assessment Method with Ordinal Regression-Based Determination of Skeletal Maturity.
- Author
-
Zhang, Dongxu, Liu, Bowen, Huang, Yulin, Yan, Yang, Li, Shaowei, He, Jinshui, Zhang, Shuyun, Zhang, Jun, and Xia, Ningshao
- Subjects
SKELETAL maturity, DEEP learning, BONE growth, FINGERS, RADIAL bone, METACARPUS, REGRESSION analysis, ULNA, DIAGNOSTIC imaging, RESEARCH funding, DESCRIPTIVE statistics, COMPUTER-aided diagnosis, ARTIFICIAL neural networks, STATISTICAL models, LOGISTIC regression analysis, ALGORITHMS - Abstract
The assessment of bone age is important for evaluating child development, optimizing the treatment of endocrine diseases, etc. The well-known Tanner-Whitehouse (TW) clinical method improves the quantitative description of skeletal development by setting up a series of distinguishable stages for each bone individually. However, the assessment is affected by rater variability, which makes the result insufficiently reliable for clinical practice. The main goal of this work is to achieve reliable and accurate skeletal maturity determination with an automated bone age assessment method called PEARLS, which is based on the TW3-RUS system (analysis of the radius, ulna, phalanges, and metacarpal bones). The proposed method comprises the point estimation of anchor (PEA) module for accurately localizing specific bones, the ranking learning (RL) module for producing a continuous stage representation of each bone by encoding the ordinal relationship between stage labels into the learning process, and the scoring (S) module for outputting the bone age directly based on two standard transform curves. Each module in PEARLS is developed on a different dataset. Finally, corresponding results are presented to evaluate the system's performance in localizing specific bones, determining the skeletal maturity stage, and assessing bone age. The mean average precision of point estimation is 86.29%, the average stage determination precision is 97.33% over all bones, and the average bone age assessment accuracy is 96.8% within 1 year for the female and male cohorts. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. A Method to Improve Parameter Estimation Success
- Author
-
H. Unozkan
- Subjects
Parameter estimation, point estimation, statistics distribution, statistical theory, Electrical engineering. Electronics. Nuclear engineering, TK1-9971 - Abstract
This paper introduces a method for improving parameter estimation in statistical models. Parameter estimation is a popular area of study in statistics, and recent years have seen the introduction of new distributions with more parameters to enhance modelling success. While finding a suitable model for a dataset is crucial, accurately estimating its parameters is equally important. In some cases, classical parameter estimation methods fail to provide closed-form estimators for the parameters. As a result, researchers commonly resort to numerical methods and software programs for parameter estimation. The success rates of models have gained significance with the rising popularity of novel techniques such as machine learning algorithms and artificial neural networks. Robust and reliable models are built on the strong theoretical foundations of statistical distributions. Specific distributions are used in various research fields to model datasets, and the assumptions associated with these distributions provide valuable insights into observations. Additionally, parameter estimation results sometimes lead researchers to direct conclusions. This paper presents an improvement method that relies on the estimation of parameters from other statistical distributions. This novel approach aims to make parameter estimation easier and more successful in certain situations. In the applications in this paper, the proposed methodology improves the success rate by up to 10%, providing an additional 6% success in the models.
- Published
- 2023
- Full Text
- View/download PDF
30. A Generalization of New Pareto-Type Distribution
- Author
-
Karakaya, Kadir, Akdoğan, Yunus, Nik, A. Saadati, Kuş, Coşkun, and Asgharzadeh, Akbar
- Published
- 2024
- Full Text
- View/download PDF
31. Multilevel Planning for Smart Charging Scheduling for On-Road Electric Vehicles Considering Seasonal Uncertainties
- Author
-
Das, Sourav, Pal, Arnab, Acharjee, Parimal, Chakraborty, Ajoy Kumar, Bhattacharya, Aniruddha, Rashid, Muhammad H., Series Editor, Bohre, Aashish Kumar, editor, Chaturvedi, Pradyumn, editor, Kolhe, Mohan Lal, editor, and Singh, Sri Niwas, editor
- Published
- 2022
- Full Text
- View/download PDF
32. Estimation and Testing Procedures for the Reliability Functions of Exponentiated Generalized Family of Distributions and a Characterization Based on Records
- Author
-
Kumari, Taruna, Pathak, Anupam, Pham, Hoang, Series Editor, Aggarwal, Anu G., editor, and Tandon, Abhishek, editor
- Published
- 2022
- Full Text
- View/download PDF
33. The Continuous Bernoulli Distribution: Mathematical Characterization, Fractile Regression, Computational Simulations, and Applications.
- Author
-
Korkmaz, Mustafa Ç., Leiva, Víctor, and Martin-Barreiro, Carlos
- Subjects
- *
BINOMIAL distribution, *CONTINUOUS distributions, *PROBABILITY density function, *QUANTILE regression, *SCIENCE education - Abstract
The continuous Bernoulli distribution is defined on the unit interval and has a unique property related to fractiles. A fractile is a position on a probability density function where the corresponding surface is a fixed proportion. This article presents the derivation of properties of the continuous Bernoulli distribution and formulates a fractile or quantile regression model for a unit response using the exponentiated continuous Bernoulli distribution. Monte Carlo simulation studies evaluate the performance of point and interval estimators for both the continuous Bernoulli distribution and the fractile regression model. Real-world datasets from science and education are analyzed to illustrate the modeling abilities of the continuous Bernoulli distribution and the exponentiated continuous Bernoulli quantile regression model. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
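For reference, the continuous Bernoulli density and its closed-form mean are compact enough to state directly. The sketch below implements these textbook formulas (not the authors' exponentiated regression model):

```python
import math

def cb_log_norm(lam):
    # log C(lam), where C(1/2) = 2 and otherwise
    # C(lam) = log(lam / (1 - lam)) / (2*lam - 1)
    #        = 2 * atanh(1 - 2*lam) / (1 - 2*lam).
    if abs(lam - 0.5) < 1e-9:
        return math.log(2.0)
    return math.log(math.log(lam / (1.0 - lam)) / (2.0 * lam - 1.0))

def cb_pdf(x, lam):
    # Density f(x | lam) = C(lam) * lam**x * (1 - lam)**(1 - x) on [0, 1].
    if not 0.0 <= x <= 1.0:
        return 0.0
    return math.exp(cb_log_norm(lam)
                    + x * math.log(lam) + (1.0 - x) * math.log(1.0 - lam))

def cb_mean(lam):
    # Closed-form mean: lam/(2*lam - 1) + 1/(2*atanh(1 - 2*lam)), lam != 1/2.
    if abs(lam - 0.5) < 1e-9:
        return 0.5
    return lam / (2.0 * lam - 1.0) + 1.0 / (2.0 * math.atanh(1.0 - 2.0 * lam))
```

Unlike the discrete Bernoulli, the normalizing constant here depends on the parameter, which is exactly the quirk that makes the distribution's fractile behavior interesting.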
34. Upper record values from the generalized Pareto distribution and associated statistical inference.
- Author
-
Zhao, Xu, Wei, Shaojie, Cheng, Weihu, Zhang, Pengyue, Zhang, Yang, and Xu, Qi
- Subjects
- *
INFERENTIAL statistics, *MAXIMUM likelihood statistics, *PARETO distribution, *FIXED point estimation, *CONFIDENCE intervals - Abstract
We investigate point estimation and confidence interval estimation for the heavy-tailed generalized Pareto distribution (GPD) based on upper record values. When the shape parameter is known, bias-corrected moment estimators and maximum likelihood estimators (MLE) for the location and scale parameters are derived. However, in practice, the shape parameter is typically unknown. We propose the MLE, via a new methodological approach, for all three parameters of the heavy-tailed GPD when the shape parameter is unknown. Confidence intervals for the location and scale parameters are constructed by the equal-probability-density principle. If the shape parameter is known, the pivots for the location and scale parameters have known (not merely approximate) distributions, whereas if the shape parameter is unknown, the distributions of the pivots are closely linked to an estimate of the shape parameter. The advantage of our method is that the proposed interval estimation provides the smallest confidence interval, regardless of whether the distribution of the pivot is symmetric. Extensive simulations demonstrate the performance of the point estimation and confidence interval estimation and show that our method outperforms the traditional technique in most cases. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
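Upper records, the data the entry above builds on, can be simulated directly: draw a GPD sample via the inverse CDF and keep each observation that exceeds all earlier ones. A sketch under assumed parameter values (not the paper's estimators):

```python
import random

def rgpd(n, mu, sigma, xi, rng):
    # Inverse-CDF sampling from the GPD:
    # F(x) = 1 - (1 + xi*(x - mu)/sigma)^(-1/xi)
    #   =>  x = mu + sigma * ((1 - u)^(-xi) - 1) / xi
    return [mu + sigma * ((1.0 - rng.random()) ** (-xi) - 1.0) / xi
            for _ in range(n)]

def upper_records(seq):
    # Keep every observation that strictly exceeds all earlier ones.
    records, current_max = [], float("-inf")
    for v in seq:
        if v > current_max:
            records.append(v)
            current_max = v
    return records

rng = random.Random(3)
sample = rgpd(1000, mu=0.0, sigma=1.0, xi=0.5, rng=rng)  # heavy tail for xi > 0
recs = upper_records(sample)
```

Only about log(n) records survive from n observations, which is why record-based inference works with so few effective data points.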
35. On the Normal Approximation of the Ratio of Means Estimation of Lognormal Distributions with Application to PM2.5 Concentrations in Northern Thailand.
- Author
-
Singhasomboon, Lapasrada and Piladaeng, Janjira
- Abstract
In this article, we present point estimation procedures for the ratio of means of two independent lognormal distributions and investigate their accuracy properties. We apply the classical normal approximation procedure and derive the mean and variance parameters of the limiting Gaussian distribution. The main accuracy criteria are the bias and mean squared error, based on Monte Carlo simulations. PM2.5 datasets from two areas are used to illustrate the proposed method, and the results are consistent with our simulations. Air pollution measurements of PM2.5 in Northern Thailand from January to April 2022 are examined; they have been identified as the most severe air pollution problem harming the health of people resident in the area. Within a given site, PM2.5 datasets often follow a right-skewed distribution and are usually fitted by the lognormal model. The mean parameter is used to compute the average mass concentration of PM2.5 at a site and to compare it across two different areas. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
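The ratio-of-means point estimate described above reduces to plugging sample log-moments into E[X] = exp(mu + sigma^2 / 2) for each sample. A numerical sketch with synthetic data (the parameter values are my own, not from the paper):

```python
import math
import random
import statistics

def lognormal_mean_ratio(x, y):
    # Plug sample log-moments into E = exp(mu + sigma^2 / 2) for each sample,
    # then take the ratio of the two fitted means.
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    mean_x = math.exp(statistics.fmean(lx) + statistics.variance(lx) / 2.0)
    mean_y = math.exp(statistics.fmean(ly) + statistics.variance(ly) / 2.0)
    return mean_x / mean_y

# Usage: two synthetic "sites" with different lognormal location parameters.
rng = random.Random(7)
x = [rng.lognormvariate(1.0, 0.5) for _ in range(5000)]  # E[X] = exp(1.125)
y = [rng.lognormvariate(0.5, 0.5) for _ in range(5000)]  # E[Y] = exp(0.625)
ratio = lognormal_mean_ratio(x, y)  # true ratio is exp(0.5)
```

Using log-moments rather than the raw sample means exploits the lognormal assumption and is markedly more stable under the right-skew typical of PM2.5 data.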
36. Point estimation for adaptive trial designs I: A methodological review.
- Author
-
Robertson, David S., Choodari‐Oskooei, Babak, Dimairo, Munya, Flight, Laura, Pallmann, Philip, and Jaki, Thomas
- Subjects
- *
FIXED point estimation, *FALSE positive error, *EXPERIMENTAL design, *ESTIMATION bias, *ERROR rates - Abstract
Recent FDA guidance on adaptive clinical trial designs defines bias as "a systematic tendency for the estimate of treatment effect to deviate from its true value," and states that it is desirable to obtain and report estimates of treatment effects that reduce or remove this bias. The conventional end‐of‐trial point estimates of the treatment effects are prone to bias in many adaptive designs, because they do not take into account the potential and realized trial adaptations. While much of the methodological developments on adaptive designs have tended to focus on control of type I error rates and power considerations, in contrast the question of biased estimation has received relatively less attention. This article is the first in a two‐part series that studies the issue of potential bias in point estimation for adaptive trials. Part I provides a comprehensive review of the methods to remove or reduce the potential bias in point estimation of treatment effects for adaptive designs, while part II illustrates how to implement these in practice and proposes a set of guidelines for trial statisticians. The methods reviewed in this article can be broadly classified into unbiased and bias‐reduced estimation, and we also provide a classification of estimators by the type of adaptive design. We compare the proposed methods, highlight available software and code, and discuss potential methodological gaps in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
37. Adjusting for treatment selection in phase II/III clinical trials with time to event data.
- Author
-
Khan, Josephine N., Kimani, Peter K., Glimm, Ekkehard, and Stallard, Nigel
- Subjects
- *
TIME trials, *CLINICAL trials, *INVESTIGATIONAL therapies, *FIXED point estimation - Abstract
Phase II/III clinical trials are efficient two-stage designs that test multiple experimental treatments. In stage 1, patients are allocated to the control and all experimental treatments, and the data collected from them are used to select the experimental treatments that continue to stage 2. Patients recruited in stage 2 are allocated to the selected treatments and the control. The combined data from stages 1 and 2 are used for a confirmatory phase III analysis. An appropriate analysis needs to adjust for the selection bias in the stage 1 data. Point estimators exist for normally distributed outcome data. Extending these estimators to time to event data is not straightforward, because treatment selection is based on correlated treatment effects and stage 1 patients without events in stage 1 are followed up in stage 2. We have derived an approximately uniformly minimum variance conditional unbiased estimator (UMVCUE) and compared its biases and mean squared errors to existing bias-adjusted estimators. In simulations, one existing bias-adjusted estimator has properties similar to the practically unbiased UMVCUE, while the others can have noticeable biases but are less variable than the UMVCUE. For confirmatory phase II/III clinical trials where unbiased estimators are desired, we recommend the UMVCUE or the existing estimator with which it has similar properties. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
38. Confidence intervals for the reliability characteristics via different estimation methods for the power Lindley model.
- Author
-
YADAV, ABHIMANYU S., VISHWAKARMA, P. K., BAKOUCH, H. S., KUMAR, UPENDRA, and CHAUHAN, S.
- Subjects
- *
CONFIDENCE intervals, *MAXIMUM likelihood statistics, *STATISTICAL bootstrapping - Abstract
In this article, classical and Bayes interval estimation procedures are discussed for the reliability characteristics, namely the mean time to system failure, the reliability function, and the hazard function, for the power Lindley model and its special case. In the classical part, maximum likelihood estimation and maximum product spacing estimation are discussed for estimating the reliability characteristics. Since exact confidence intervals for the reliability characteristics cannot be computed directly, asymptotic confidence intervals are constructed via large-sample theory using the above-mentioned classical estimation methods. Further, bootstrap (standard-boot, percentile-boot, Student's t-boot) confidence intervals are also obtained. Next, Bayes estimators are derived with a gamma prior using the squared error loss function and the linex loss function. Bayes credible intervals for the same characteristics are constructed using simulated posterior samples. The obtained estimators are evaluated by a Monte Carlo simulation study in terms of mean square error, average width, and coverage probabilities. A real-life example is also illustrated for application purposes. [ABSTRACT FROM AUTHOR]
- Published
- 2022
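Among the intervals discussed above, the percentile bootstrap is the simplest to sketch. The code below is a generic nonparametric version applied to an assumed exponential sample; the paper's setting is parametric (power Lindley), so this only illustrates the percentile-boot mechanics:

```python
import random

def percentile_boot_ci(data, stat, level=0.95, n_boot=2000, seed=0):
    # Percentile bootstrap: resample with replacement, recompute the
    # statistic, and take empirical quantiles of the replicates.
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([data[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot))
    alpha = (1.0 - level) / 2.0
    return reps[int(alpha * n_boot)], reps[int((1.0 - alpha) * n_boot) - 1]

def reliability_at(t, xs):
    # Nonparametric estimate of R(t) = P(T > t): the sample exceedance fraction.
    return sum(x > t for x in xs) / len(xs)

# Usage: CI for R(1) on stand-in exponential lifetime data.
rng = random.Random(11)
lifetimes = [rng.expovariate(1.0) for _ in range(200)]
lo, hi = percentile_boot_ci(lifetimes, lambda xs: reliability_at(1.0, xs))
```

A parametric version would refit the model to each resample and evaluate R(t) from the fitted parameters; only the `stat` callable changes.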
39. GENERALIZED FIDUCIAL INFERENCE FOR THE CHEN DISTRIBUTION.
- Author
-
Çetinkaya, Çağatay
- Subjects
MAXIMUM likelihood statistics, INFERENTIAL statistics - Abstract
The fiducial inference idea was first proposed by Fisher as a powerful method of statistical inference. Many authors, such as Weerahandi and Hannig et al., improved this method from different points of view. Since the Bayesian method has deficiencies, such as requiring a prior distribution to be assumed when there is little or no information about the parameters, fiducial inference can be used to overcome these adversities. This study deals with generalized fiducial inference for the shape parameters of Chen's two-parameter lifetime distribution with bathtub-shaped or increasing failure rate. The method based on the inverse of the structural equation, as proposed by Hannig et al., is used. We propose generalized fiducial inferences for the parameters with their confidence intervals. These estimates are then compared with their maximum likelihood and Bayesian counterparts. Simulation results show that generalized fiducial inference is more applicable than the other methods in terms of estimator performance for the shape parameters of the Chen distribution. Finally, a real data example is used to illustrate the theoretical outcomes of these estimation procedures. [ABSTRACT FROM AUTHOR]
- Published
- 2022
40. Parameter Estimation Procedures for Log Exponential-Power Distribution with Real Data Applications.
- Author
-
KORKMAZ, Mustafa Ç., KARAKAYA, Kadir, and AKDOĞAN, Yunus
- Subjects
PARAMETER estimation, DATA distribution, MONTE Carlo method, EXPONENTIAL families (Statistics), LEAST squares, MAXIMUM likelihood statistics - Abstract
Copyright of Adiyaman University Journal of Science & Technology / Adıyaman Üniversitesi Fen Bilimleri Dergisi is the property of Adiyaman University, Institute of Science / Adiyaman Universitesi Fen Bilimleri Enstitusu and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2022
- Full Text
- View/download PDF
41. A nonparametric sequential learning procedure for estimating the pure premium.
- Author
-
Hu, Jun and Hong, Liang
- Abstract
With the advent of the "big" data era, large-sample properties of a statistical learning method are becoming more and more important in an actuary's daily work. For a fixed sample size, regardless of how large it is, the variance of an estimator can be larger than a pre-assigned level to an arbitrary extent. In this paper, we propose a nonparametric sequential learning procedure for estimating the pure premium. Our method not only provides an accurate estimate of the pure premium but also guarantees that the mean of our random sample sizes is close to the unobservable optimal fixed sample size and the variance of our estimator is close to all small pre-determined levels. In addition, our method is nonparametric and applicable to any claims distribution; hence it avoids potential issues associated with a parametric model such as model misspecification risk and the effect of selection. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
42. Fundamentals of Statistical Inference
- Author
-
Pinheiro Cinelli, Lucas, Araújo Marins, Matheus, Barros da Silva, Eduardo Antúnio, Lima Netto, Sérgio, Cinelli, Lucas Pinheiro, Marins, Matheus Araújo, Barros da Silva, Eduardo Antônio, and Netto, Sérgio Lima
- Published
- 2021
- Full Text
- View/download PDF
43. Transmuted lower record type inverse rayleigh distribution: estimation, characterizations and applications.
- Author
-
Tanış, Caner
- Abstract
This study introduces a new lifetime distribution called the transmuted lower record type inverse Rayleigh, which extends the inverse Rayleigh distribution and has the potential to model the recovery times of Covid-19 patients. The new distribution is obtained using the distributions of the first two lower record statistics of the inverse Rayleigh distribution. We discuss some statistical inferences and mathematical properties of the suggested distribution. We examine characteristics of the proposed distribution such as density shape, hazard function, moments, moment generating function, incomplete moments, Rényi entropy, order statistics, and stochastic ordering. We consider five methods for point estimation of the proposed distribution: maximum likelihood, least squares, weighted least squares, Anderson-Darling, and Cramér-von Mises. Then, a comprehensive Monte Carlo simulation study is carried out to assess the risk behavior of the examined estimators. We provide two real data applications to illustrate the fitting ability of the proposed model and compare its fit with competitor models. Unlike many previously proposed distributions, the distribution introduced in this paper has modeled the recovery times of Covid-19 patients. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
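Of the five point-estimation methods listed, least squares is the most compact to illustrate. The sketch below fits the plain inverse Rayleigh (CDF F(x) = exp(-lam/x^2), the base of the proposed model, not the transmuted lower-record extension itself) by minimizing squared distances between the fitted CDF at the order statistics and the plotting positions i/(n+1); the grid search and parameter values are my own assumptions:

```python
import math
import random

def inv_rayleigh_cdf(x, lam):
    # CDF of the inverse Rayleigh distribution: F(x) = exp(-lam / x^2).
    return math.exp(-lam / (x * x))

def lse_lambda(sample, grid=None):
    # Least-squares estimate: minimize sum_i (F(x_(i); lam) - i/(n+1))^2
    # over a simple one-dimensional grid of candidate lam values.
    xs = sorted(sample)
    n = len(xs)
    if grid is None:
        grid = [0.01 * k for k in range(1, 1001)]  # lam in (0, 10]
    def loss(lam):
        return sum((inv_rayleigh_cdf(x, lam) - i / (n + 1)) ** 2
                   for i, x in enumerate(xs, start=1))
    return min(grid, key=loss)

# Usage: inverse-CDF sampling x = sqrt(-lam / log(u)), then recover lam.
rng = random.Random(9)
true_lam = 2.0
data = [math.sqrt(-true_lam / math.log(rng.random())) for _ in range(500)]
lam_hat = lse_lambda(data)
```

The weighted least squares, Anderson-Darling, and Cramér-von Mises variants only change the `loss` function; the minimization machinery stays the same.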
44. Characterizations of the Maximum Likelihood Estimator of the Cauchy Distribution.
- Author
-
Okamura, K. and Otobe, Y.
- Abstract
This paper gives a new approach to the joint maximum likelihood estimation of the location and scale of the Cauchy distribution. We regard the pair as a single complex parameter and derive a new form of the likelihood equation in a complex variable. Based on this equation, we provide a new iterative scheme approximating the maximum likelihood estimate. We also handle the equation in an algebraic manner and derive a polynomial containing the maximum likelihood estimate as a root. This algebraic approach provides another scheme approximating the maximum likelihood estimate via root-finding algorithms for polynomials and, furthermore, establishes the non-existence of closed-form formulae for the case where the sample size is five. We finally provide some numerical examples to show that our method is effective. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
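A different but standard route to the same Cauchy MLE is the EM iteration obtained from the Student-t (1 degree of freedom) scale-mixture representation, offered here as a companion sketch rather than the authors' complex-variable or polynomial schemes:

```python
import math
import random

def cauchy_mle_em(x, iters=300):
    # EM for the Cauchy location-scale MLE via the t_1 scale-mixture view.
    # E-step: w_j = 2 / (1 + ((x_j - mu)/sigma)^2)
    # M-step: mu = sum(w*x)/sum(w);  sigma^2 = mean(w * (x - mu)^2)
    n = len(x)
    xs = sorted(x)
    mu = xs[n // 2]                                      # start at the median
    sigma = (xs[3 * n // 4] - xs[n // 4]) / 2.0 or 1.0   # half the IQR
    for _ in range(iters):
        w = [2.0 / (1.0 + ((xi - mu) / sigma) ** 2) for xi in x]
        sw = sum(w)
        mu = sum(wi * xi for wi, xi in zip(w, x)) / sw
        sigma = math.sqrt(sum(wi * (xi - mu) ** 2 for wi, xi in zip(w, x)) / n)
    return mu, sigma

# Usage: recover (mu, sigma) = (2.0, 0.5) from synthetic Cauchy draws,
# sampled by the inverse CDF x = mu + sigma * tan(pi * (u - 1/2)).
rng = random.Random(5)
sample = [2.0 + 0.5 * math.tan(math.pi * (rng.random() - 0.5)) for _ in range(2000)]
mu_hat, sigma_hat = cauchy_mle_em(sample)
```

The downweighting of far-out observations by `w` is what makes the Cauchy MLE robust despite the distribution having no mean.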
45. Robustness of Minimum Density Power Divergence Estimators and Wald-type test statistics in loglinear models with multinomial sampling
- Author
-
Brugnano, Luigi, Efendiev, Yalchin, Keller, André, Calviño Martínez, Aída, Martín Apaolaza, Nirian, and Pardo Llorente, Leandro
- Abstract
In this paper we propose a new family of estimators, Minimum Density Power Divergence Estimators (MDPDE), as a robust generalization of maximum likelihood estimators (MLE) for the loglinear model with multinomial sampling, using the Density Power Divergence (DPD) measure introduced by Basu et al. (1998). Based on these estimators, we further develop two types of confidence intervals (asymptotic and bootstrap), as well as a new robust family of Wald-type test statistics for testing a nested sequence of loglinear models. Furthermore, we study theoretically the robustness properties of both the MDPDE and the Wald-type tests through the classical influence function analysis. Finally, a simulation study provides further confirmation of the validity of the theoretical results established in the paper. Funded by the Ministerio de Ciencia, Innovación y Universidades, Spain.
- Published
- 2024
46. Generalized Fiducial Inference for the Stress–Strength Reliability of Generalized Logistic Distribution
- Author
-
Menghan Li, Liang Yan, Yaru Qiao, Xia Cai, and Khamis K. Said
- Subjects
generalized fiducial inference, stress–strength, generalized logistic distribution, point estimation, interval estimation, Mathematics, QA1-939 - Abstract
Generalized logistic distribution, as the generalized form of the symmetric logistic distribution, plays an important role in reliability analysis. This article focuses on the statistical inference for the stress–strength parameter R = P(Y < X) of the generalized logistic distribution with the same and different scale parameters.
- Published
- 2023
- Full Text
- View/download PDF
47. Compound transmuted family of distributions: properties and applications
- Author
-
Kuş, Coşkun, Karakaya, Kadir, Tanış, Caner, Akdoğan, Yunus, Sert, Sümeyra, and Kalkan, Fahreddin
- Published
- 2023
- Full Text
- View/download PDF
48. Bayesian inference with uncertain data of imprecise observations.
- Author
-
Yao, Kai
- Subjects
- *
BAYESIAN field theory, *BAYES' theorem, *FIXED point estimation, *DISTRIBUTION (Probability theory) - Abstract
Bayesian inference is a technique of statistical inference which uses Bayes' theorem to update the probability distribution as new observed data become available. Uncertain variables are a tool for modeling imprecisely observed quantities associated with experiential information. By integrating Bayesian inference and uncertain variables, this paper proposes an approach of uncertain Bayesian inference to deal with Bayesian inference problems involving imprecise observations. The posterior distribution is derived, which gives the probability distribution of an unknown parameter conditional on uncertain observations. Based on the posterior distribution, several inference problems, including point estimation, interval estimation, and Bayesian prediction, are investigated. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
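As a precise-data baseline for the update rule the entry above generalizes, the conjugate normal-normal case of Bayes' theorem fits in a few lines (a generic textbook sketch; the uncertain-variable machinery of the paper is not reproduced here):

```python
def normal_posterior(prior_mean, prior_var, obs, obs_var):
    # Conjugate update for a normal mean with known observation variance:
    # precisions (inverse variances) add, and the posterior mean is the
    # precision-weighted average of the prior mean and the data.
    post_var = 1.0 / (1.0 / prior_var + len(obs) / obs_var)
    post_mean = post_var * (prior_mean / prior_var + sum(obs) / obs_var)
    return post_mean, post_var

# Usage: vague prior (variance 100), three observations near 2.0.
m, v = normal_posterior(0.0, 100.0, [1.8, 2.2, 2.0], 1.0)
```

The uncertain-observation approach replaces the crisp `obs` values with uncertain variables, but the posterior still plays the same conditional-update role.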
49. Fuzzy confidence interval construction and its application in recovery time for COVID-19 patients.
- Author
-
Parchami, A., Taheri, S. M., Falsafain, A., and Mashinchi, M.
- Subjects
COVID-19, STATISTICAL models, FUZZY sets, SMELL disorders, CONFIDENCE intervals, FIXED point estimation, OLFACTORY perception - Abstract
An approach is proposed to construct fuzzy confidence intervals for unknown parameters in statistical models. In this approach, a family of confidence intervals for the unknown crisp parameters is considered. Such confidence intervals are used to obtain a fuzzy confidence interval for the parameter of interest. The proposed approach employs a wide range of confidence intervals to obtain a trapezoidal-shaped fuzzy set of the parameter space as the fuzzy confidence interval for the parameter of interest. Using the resolution identity, it is shown that the constructed fuzzy confidence intervals are indeed fuzzy sets of the parameter space. Some numerical examples are provided to explain the functionality of the approach for one-sided and two-sided fuzzy confidence intervals. Moreover, an application of the proposed approach in the health sciences is provided for the case of the recovery time of olfactory and gustatory dysfunctions in COVID-19 patients. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
50. A possibilistic analogue to Bayes estimation with fuzzy data and its application in machine learning.
- Author
-
Arefi, Mohsen, Viertl, Reinhard, and Taheri, S. Mahmoud
- Subjects
- *
BAYES' estimation, *FIXED point estimation, *CONCEPT learning, *STATISTICAL models, *LEARNING problems, *MACHINE learning - Abstract
A Bayesian approach in a possibilistic context, when the available data for the underlying statistical model are fuzzy, is developed. The problem of point estimation with fuzzy data is studied in the introduced possibilistic Bayesian approach. For the point estimation, we introduce one method that does not use a loss function and one that does. For point estimation with a loss function, we first define a risk function based on a possibilistic posterior distribution, and then the unknown parameter is estimated based on this risk function. Briefly, the present work extends previous works in two directions: first, the underlying model is assumed to be probabilistic rather than possibilistic; and second, the problem of Bayes estimation is developed for the two cases of estimation without and with a loss function. Then, the applicability of the proposed approach to concept learning is investigated. In particular, a naive possibility Bayes classifier is introduced and applied to some real-world concept learning problems. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF