1,650 results for "Heteroscedasticity"
Search Results
2. Inference and Local Influence Assessment in a Multifactor Skew-Normal Linear Mixed Model.
- Author
-
Zeinolabedin Najafi, Karim Zare, Mohammad Reza Mahmoudi, Soheil Shokri, and Amir Mosavi
- Subjects
Monte Carlo method, Expectation-maximization algorithms, Heteroscedasticity, Homoscedasticity, Skewness (probability theory), Applied mathematics - Abstract
This work considers a multifactor linear mixed model under heteroscedasticity in the random-effect factors and skew-normal errors for modeling correlated datasets. We implement an expectation–maximization (EM) algorithm to obtain the maximum likelihood estimates using conditional distributions of the skew-normal distribution. The EM algorithm is also used to extend the local influence approach under three model perturbation schemes in this model. Furthermore, a Monte Carlo simulation is conducted to evaluate the efficiency of the estimators. Finally, a real data set is used to make an illustrative comparison among the following four scenarios: normal/skew-normal errors and heteroscedasticity/homoscedasticity in the random-effect factors. The empirical studies show our methodology can improve the estimates when the model errors follow a skew-normal distribution. In addition, the local influence analysis indicates that our model can reduce the effects of anomalous observations in comparison to the normal model. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
3. Estimation of treatment effects under endogenous heteroskedasticity
- Author
-
Haiqing Xu and Jason Abrevaya
- Subjects
Economics and Econometrics, Heteroscedasticity, Average treatment effect, Applied Mathematics, Monte Carlo method, Nonparametric statistics, Estimator, Inference, Variance (accounting), Variable (computer science), Econometrics, Mathematics - Abstract
This paper considers a treatment effect model in which individual treatment effects may be heterogeneous, even among observationally identical individuals. Specifically, by extending the classical instrumental-variables (IV) model with an endogenous binary treatment, the heteroskedasticity of the error disturbance is allowed to vary with the treatment variable so that the treatment generates both mean and variance effects on the outcome. In this endogenous heteroskedasticity IV (EHIV) model, the standard IV estimator can be inconsistent for the average treatment effect (ATE) and lead to incorrect inference. After nonparametric identification is established, closed-form estimators are provided under the linear EHIV specification for the mean and variance treatment effects, as well as the average treatment effect on the treated (ATT). Asymptotic properties of the estimators are derived. We use Monte Carlo experiments to investigate the performance of the proposed approach and then consider an empirical application regarding the effect of fertility on female labor supply. Our findings demonstrate the importance of accounting for endogenous heteroskedasticity.
- Published
- 2023
- Full Text
- View/download PDF
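Below is a minimal Python/NumPy sketch of the setting in entry 3: an endogenous binary treatment whose level also changes the error variance, with the standard Wald/IV estimate computed for reference. The data-generating values and variable names are illustrative assumptions; this is not the authors' EHIV estimator, and in this simplified constant-effect design the plain IV estimate still recovers the mean effect.

```python
# Illustrative simulation only -- not the EHIV estimator from entry 3.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
z = rng.binomial(1, 0.5, n)                                  # binary instrument
v = rng.normal(size=n)                                       # unobserved confounder
d = (0.5 * z + v + rng.normal(size=n) > 0).astype(float)     # endogenous binary treatment
sigma = np.where(d == 1, 2.0, 1.0)                           # error spread depends on treatment
y = 1.0 + 0.7 * d + v + sigma * rng.normal(size=n)           # mean effect 0.7, variance effect 2x

beta_iv = np.cov(y, z)[0, 1] / np.cov(d, z)[0, 1]            # standard Wald/IV estimate
print(f"Wald/IV estimate: {beta_iv:.3f} (mean effect used in the simulation: 0.7)")
print(f"sd(y | d=1) = {y[d == 1].std():.3f}, sd(y | d=0) = {y[d == 0].std():.3f}")
```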
4. Relaxing conditional independence in an endogenous binary response model
- Author
-
Alyssa Carlson
- Subjects
Economics and Econometrics ,Heteroscedasticity ,Conditional independence ,Computer science ,Applied Mathematics ,Homoscedasticity ,Econometrics ,Asymptotic distribution ,Estimator ,Endogeneity ,Conditional expectation ,Control function - Abstract
For binary response models, the literature primarily addresses endogeneity by a control function approach assuming conditional independence (CF-CI). However, as the literature also notes, CF-CI implies conditions like homoskedasticity (of the latent error with respect to the instruments) that fail in many empirical settings. I propose an alternative approach that allows for heteroskedasticity, achieving identification with a conditional mean restriction. These identification results apply to a latent Gaussian error term with flexibly parametrized heteroskedasticity. I propose a two-step conditional maximum likelihood estimator and derive its asymptotic distribution. In simulations, the new estimator outperforms others when CF-CI fails and is fairly robust to distributional misspecification.
- Published
- 2023
- Full Text
- View/download PDF
5. Second-order refinements for t-ratios with many instruments
- Author
-
Yukitoshi Matsushita and Taisuke Otsu
- Subjects
Economics and Econometrics ,Heteroscedasticity ,Sample size determination ,Simultaneous equations ,Applied Mathematics ,Homoscedasticity ,Instrumental variable ,Null (mathematics) ,Applied mathematics ,Contrast (statistics) ,Estimator ,Mathematics - Abstract
This paper studies second-order properties of the many-instruments-robust t-ratios based on the limited information maximum likelihood and Fuller estimators for instrumental variable regression models with homoskedastic errors under the many instruments asymptotics, where the number of instruments may increase proportionally with the sample size $n$, and proposes second-order refinements to the t-ratios to improve the size and power properties. Based on asymptotic expansions of the null and non-null distributions of the t-ratios derived under the many instruments asymptotics, we show that the second-order terms of those expansions may have non-trivial impacts on the size as well as the power properties. Furthermore, we propose adjusted t-ratios whose approximation errors for the null rejection probabilities are of order $O(n^{-1})$, in contrast to order $O(n^{-1/2})$ for the unadjusted t-ratios, and show that these adjustments induce some desirable power properties in terms of local maximinity. Although these results are derived under homoskedastic errors, we also establish a stochastic expansion for a heteroskedasticity-robust t-ratio, and propose an analogous adjustment under slight deviations from homoskedasticity.
- Published
- 2023
- Full Text
- View/download PDF
6. A simple joint model for returns, volatility and volatility of volatility
- Author
-
Yashuang (Dexter) Ding
- Subjects
Economics and Econometrics, Heteroscedasticity, Applied Mathematics, Gaussian, Estimator, Asset return, Discrete time and continuous time, Economics, Econometrics, Volatility (finance), Empirical evidence - Abstract
We propose a model that allows for conditional heteroskedasticity in the volatility of asset returns and incorporates current return information into the volatility nowcast and forecast. Our model can capture all stylised facts of asset returns even with Gaussian innovations and is simple to implement. Moreover, we show that our model converges weakly to the GARCH-type diffusion as the length of the discrete time intervals between observations goes to zero. Empirical evidence shows that our model has a better fit, a more efficient parameter estimator as well as more accurate volatility and VaR forecasts than other common GARCH-type models.
- Published
- 2023
- Full Text
- View/download PDF
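As a point of reference for the GARCH-type models the abstract of entry 6 compares against, here is a plain GARCH(1,1) simulation with Gaussian innovations; the parameter values are illustrative and this is not the author's joint returns/volatility/volatility-of-volatility specification.

```python
# Baseline GARCH(1,1) with Gaussian innovations -- a reference model, not the one proposed in entry 6.
import numpy as np

rng = np.random.default_rng(1)
T = 2000
omega, alpha, beta = 0.05, 0.08, 0.90          # illustrative parameter values
r = np.zeros(T)                                 # returns
h = np.zeros(T)                                 # conditional variances
h[0] = omega / (1 - alpha - beta)               # start at the unconditional variance
for t in range(1, T):
    h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    r[t] = np.sqrt(h[t]) * rng.normal()

excess_kurtosis = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3
print(f"excess kurtosis: {excess_kurtosis:.2f}  (fat tails from volatility clustering)")
```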
7. Bias-corrected method of moments estimators for dynamic panel data models
- Author
-
Sebastian Kripfganz, Kazuhiko Hayakawa, and Jörg Breitung
- Subjects
Statistics and Probability ,Moment (mathematics) ,Economics and Econometrics ,Heteroscedasticity ,Autoregressive model ,Monte Carlo method ,Estimator ,Applied mathematics ,Statistics, Probability and Uncertainty ,Method of moments (statistics) ,Random effects model ,Panel data ,Mathematics - Abstract
A computationally simple bias correction for linear dynamic panel data models is proposed and its asymptotic properties are studied when the number of time periods is fixed or tends to infinity with the number of panel units. The approach can accommodate both fixed-effects and random-effects assumptions, heteroskedastic errors, as well as higher-order autoregressive models. Panel-corrected standard errors are proposed that allow for robust inference in dynamic models with cross-sectionally correlated errors. Monte Carlo experiments suggest that under the assumption of strictly exogenous regressors the bias-corrected method of moments estimator outperforms popular GMM estimators in terms of efficiency and correctly sized tests.
- Published
- 2022
- Full Text
- View/download PDF
8. Convergence of spectral density estimators in the locally stationary framework
- Author
-
Rafael Kawka
- Subjects
Statistics and Probability, Economics and Econometrics, Heteroscedasticity, Stationary process, Estimator, Spectral density, Covariance, Simple (abstract algebra), Kernel (statistics), Convergence (routing), Applied mathematics, Statistics, Probability and Uncertainty, Mathematics - Abstract
Asymptotic properties of classical kernel estimators for the spectral density are studied in the locally stationary framework. In particular, it is shown that for a locally stationary process standard spectral density estimators consistently estimate the time-averaged spectral density. This result is complemented by some illustrative examples and applications including HAC-inference in the multiple linear regression model, a simple visual tool for the detection of unconditional heteroskedasticity and a test for covariance stationarity.
- Published
- 2022
- Full Text
- View/download PDF
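A minimal sketch of the classical kernel-smoothed periodogram whose locally stationary behaviour entry 8 studies, applied to a series with slowly time-varying innovation variance. The bandwidth, kernel and toy process below are illustrative assumptions.

```python
# Kernel-smoothed periodogram (Daniell-type averaging over neighbouring Fourier frequencies).
import numpy as np

def smoothed_periodogram(x, bandwidth=0.05):
    """Average the periodogram over a window of neighbouring Fourier frequencies."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    dft = np.fft.rfft(x)
    periodogram = (np.abs(dft) ** 2) / (2 * np.pi * n)
    m = max(1, int(bandwidth * n))                      # half-width of the smoothing window
    kernel = np.ones(2 * m + 1) / (2 * m + 1)
    padded = np.pad(periodogram, m, mode="reflect")
    smoothed = np.convolve(padded, kernel, mode="valid")
    freqs = np.fft.rfftfreq(n, d=1.0) * 2 * np.pi
    return freqs, smoothed

rng = np.random.default_rng(2)
n = 4096
sigma_t = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(n) / n)   # slowly varying innovation sd
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + sigma_t[t] * rng.normal()         # locally stationary AR(1)-type series
freqs, f_hat = smoothed_periodogram(x)
print(f_hat[:5])
```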
9. Robust post-selection inference of high-dimensional mean regression with heavy-tailed asymmetric or heteroskedastic errors
- Author
-
Yuanyuan Lin, Guohao Shen, Jian Huang, and Dongxiao Han
- Subjects
Economics and Econometrics ,Heteroscedasticity ,Huber loss ,Applied Mathematics ,Linear regression ,Linear model ,Inference ,Applied mathematics ,Estimator ,Regression ,Statistical hypothesis testing ,Mathematics - Abstract
We propose a robust post-selection inference method based on the Huber loss for the regression coefficients when the error distribution is heavy-tailed and asymmetric in a high-dimensional linear model with an intercept term. The asymptotic properties of the resulting estimators are established under mild conditions. We also extend the proposed method to accommodate heteroscedasticity, assuming the error terms are symmetric and other suitable conditions hold. Statistical tests for low-dimensional parameters or individual coefficients in the high-dimensional linear model are also studied. Simulation studies demonstrate desirable properties of the proposed method. An application to a genomic dataset on riboflavin production rate is provided.
- Published
- 2022
- Full Text
- View/download PDF
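A generic robust-regression sketch in the spirit of entry 9, using scikit-learn's HuberRegressor on heavy-tailed, asymmetric errors and comparing it with OLS; this is only a low-dimensional illustration, not the paper's high-dimensional post-selection inference procedure, and the tuning values are assumptions.

```python
# Huber-loss regression vs. OLS on heavy-tailed, asymmetric errors (illustrative only).
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(3)
n, p = 500, 5
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.0, 0.5, 0.0])
# 90% Gaussian noise, 10% large positive exponential shocks -> heavy-tailed and asymmetric
eps = np.where(rng.random(n) < 0.9, rng.normal(scale=1.0, size=n), rng.exponential(10.0, size=n))
y = X @ beta + eps

huber = HuberRegressor(epsilon=1.35, alpha=0.0, max_iter=1000).fit(X, y)
ols = LinearRegression().fit(X, y)
print("Huber coefficients:", np.round(huber.coef_, 2))
print("OLS   coefficients:", np.round(ols.coef_, 2))
```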
10. Universal Prediction Band via Semi-Definite Programming
- Author
-
Tengyuan Liang
- Subjects
Statistics and Probability, Heteroscedasticity, Machine Learning, Econometrics, Statistics Theory, Optimization and Control, Semidefinite programming, Nonparametric statistics, Explained sum of squares, Regular polygon, Variance, Applied mathematics, Uncertainty quantification, Statistics, Probability and Uncertainty, Interpolation, Mathematics - Abstract
We propose a computationally efficient method to construct nonparametric, heteroscedastic prediction bands for uncertainty quantification, with or without any user-specified predictive model. Our approach provides an alternative to the now-standard conformal prediction for uncertainty quantification, with novel theoretical insights and computational advantages. The data-adaptive prediction band is universally applicable with minimal distributional assumptions, has strong non-asymptotic coverage properties, and is easy to implement using standard convex programs. Our approach can be viewed as a novel variance interpolation with confidence and further leverages techniques from semi-definite programming and sum-of-squares optimization. Theoretical and numerical performance of the proposed approach for uncertainty quantification is analyzed.
- Published
- 2022
- Full Text
- View/download PDF
11. Statistical analysis of concentric objects estimation problem under the heteroscedastic Berman model
- Author
-
Ali Al-Sharadqah and Phuc Nguyen
- Subjects
Statistics and Probability ,Estimation ,Nonlinear system ,Heteroscedasticity ,Data point ,Applied Mathematics ,Covariate ,Estimator ,Statistics, Probability and Uncertainty ,Concentric ,Ellipse ,Algorithm ,Mathematics - Abstract
The problem of fitting two concentric objects (circles and ellipses) to noisy data plays a vital role in many fields. In this paper, statistical methodologies are deployed under the additional assumption that the angular differences between successively measured data points are known, which is known as Berman's model. Heteroscedasticity between covariates is also assumed, and hence several estimators for the problem of fitting concentric circles and ellipses are developed and their statistical properties are established. Unlike concentric circles, the problem of fitting concentric ellipses turns out to be nonlinear even under Berman's assumption, and as such, iterative algorithms are implemented. Extensive numerical experiments were conducted to validate our results.
- Published
- 2022
- Full Text
- View/download PDF
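A simple algebraic (Kasa-style) least-squares fit of two circles constrained to share one centre, as a baseline for the concentric-object fitting problem of entry 11; it does not use Berman's known-angle assumption or the heteroscedastic error model developed in the paper.

```python
# Joint algebraic fit: x^2 + y^2 = 2*a*x + 2*b*y + c_k, with a common centre (a, b) and c_k per circle.
import numpy as np

def fit_concentric_circles(pts1, pts2):
    def block(pts, k, n_circles=2):
        cols = np.zeros((len(pts), 2 + n_circles))
        cols[:, 0] = 2 * pts[:, 0]
        cols[:, 1] = 2 * pts[:, 1]
        cols[:, 2 + k] = 1.0
        rhs = (pts ** 2).sum(axis=1)
        return cols, rhs
    A1, b1 = block(pts1, 0)
    A2, b2 = block(pts2, 1)
    A, b = np.vstack([A1, A2]), np.concatenate([b1, b2])
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    a_, b_, c1, c2 = sol
    # since c_k = r_k^2 - a^2 - b^2, recover the two radii
    r1 = np.sqrt(c1 + a_ ** 2 + b_ ** 2)
    r2 = np.sqrt(c2 + a_ ** 2 + b_ ** 2)
    return (a_, b_), (r1, r2)

rng = np.random.default_rng(4)
theta = rng.uniform(0, 2 * np.pi, 200)
centre = np.array([1.0, -2.0])
pts1 = centre + np.c_[3 * np.cos(theta), 3 * np.sin(theta)] + rng.normal(scale=0.05, size=(200, 2))
pts2 = centre + np.c_[5 * np.cos(theta), 5 * np.sin(theta)] + rng.normal(scale=0.10, size=(200, 2))
print(fit_concentric_circles(pts1, pts2))
```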
12. Efficient estimation of high-dimensional dynamic covariance by risk factor mapping: Applications for financial risk management
- Author
-
Thomas W.C. Chan, Amanda M. Y. Chu, and Mike K. P. So
- Subjects
Economics and Econometrics, Heteroscedasticity, Computer science, Applied Mathematics, Risk measure, Financial risk management, Covariance, Copula (probability theory), Autoregressive model, Econometrics, Portfolio, Stock market - Abstract
This paper aims to explore a modified method of high-dimensional dynamic variance–covariance matrix estimation via risk factor mapping, which can yield a dependence estimation of asset returns within a large portfolio with high computational efficiency. The essence of our methodology is to express the time-varying dependence of high-dimensional return variables using the co-movement concept of returns with respect to risk factors. A novelty of the proposed methodology is to allow mapping matrices, which govern the co-movement of returns, to be time-varying. We also consider the flexible modeling of risk factors by a copula multivariate generalized autoregressive conditional heteroscedasticity (MGARCH) model. Through the proposed risk factor mapping model, the number of parameters and the time complexity are functions of a small number of risk factors instead of the number of stocks in the portfolio, making our proposed methodology highly scalable. We adopt Bayesian methods to estimate unknown parameters and various risk measures in the proposed model. The proposed risk mapping method and financial applications are demonstrated by an empirical study of the Hong Kong stock market. The assessment of the effectiveness of the mapping via risk measure estimation is also discussed.
- Published
- 2022
- Full Text
- View/download PDF
13. An indirect proof for the asymptotic properties of VARMA model estimators
- Author
-
Guy Melard
- Subjects
Statistics and Probability, Economics and Econometrics, Heteroscedasticity, Series (mathematics), Gaussian, Ergodicity, Strong consistency, Estimator, Asymptotic distribution, Context (language use), Applied mathematics, Statistics, Probability and Uncertainty, Mathematics - Abstract
Strong consistency and asymptotic normality of a Gaussian quasi-maximum likelihood estimator for the parameters of a causal, invertible, and identifiable vector autoregressive-moving average (VARMA) model are established in an indirect way. The proof is based on similar results for a much wider class of VARMA models with time-dependent coefficients, hence in the context of non-stationary and heteroscedastic time series. For that reason, the proof avoids spectral analysis arguments and does not make use of ergodicity. The results presented are also applicable to ARMA models.
- Published
- 2022
- Full Text
- View/download PDF
14. Jackknife estimation of a cluster-sample IV regression model with many weak instruments
- Author
-
Norman R. Swanson, Tiemen Woutersen, and John C. Chao
- Subjects
Statistics::Theory ,History ,Heteroscedasticity ,Economics and Econometrics ,Polymers and Plastics ,Applied Mathematics ,Null (mathematics) ,Instrumental variable ,Estimator ,Asymptotic distribution ,Regression analysis ,Industrial and Manufacturing Engineering ,Econometrics ,Statistics::Methodology ,Cluster sampling ,Business and International Management ,Jackknife resampling ,Mathematics - Abstract
This paper proposes new jackknife IV estimators that are robust to the effects of many weak instruments and error heteroskedasticity in a cluster sample setting with cluster-specific effects and possibly many included exogenous regressors. The estimators that we propose are designed to properly partial out the cluster-specific effects and included exogenous regressors while preserving the re-centering property of the jackknife methodology. To the best of our knowledge, our proposed procedures provide the first consistent estimators under many weak instrument asymptotics in the setting considered. We also present results on the asymptotic normality of our estimators and show that t-statistics based on our estimators are asymptotically normal under the null and consistent under fixed alternatives. Our Monte Carlo results further show that our t-statistics perform better in controlling size in finite samples than those based on alternative jackknife IV procedures previously introduced in the literature.
- Published
- 2023
- Full Text
- View/download PDF
15. Regularized estimation of high‐dimensional vector autoregressions with weakly dependent innovations
- Author
-
Marcelo C. Medeiros, Ricardo Masini, and Eduardo F. Mendes
- Subjects
Statistics and Probability, Heteroscedasticity, Current (mathematics), Series (mathematics), Applied Mathematics, Gaussian, Context (language use), Lasso (statistics), Mixing (mathematics), Applied mathematics, Statistics, Probability and Uncertainty, Conditional variance, Mathematics - Abstract
There has been considerable progress in understanding the properties of sparse regularization procedures in high-dimensional models. In the time series context, this is mostly restricted to Gaussian autoregressions or mixing sequences. We study oracle properties of LASSO estimation of weakly sparse vector-autoregressive models with heavy-tailed, weakly dependent innovations, with virtually no assumption on the conditional heteroskedasticity. In contrast to the current literature, our innovation process satisfies an $L^1$-mixingale-type condition on the centered conditional covariance matrices. This condition covers $L^1$-NED sequences and strong ($\alpha$-) mixing sequences as particular examples.
- Published
- 2021
- Full Text
- View/download PDF
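An equation-by-equation LASSO fit of a sparse VAR(1) with heavy-tailed innovations, sketching the kind of estimator whose oracle properties entry 15 analyses; the penalty level and the toy transition matrix are illustrative assumptions, not the paper's procedure.

```python
# Sparse VAR(1) estimated equation by equation with the LASSO (illustrative only).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
k, T = 10, 400
A = np.zeros((k, k))
A[np.arange(k), np.arange(k)] = 0.5            # sparse (diagonal) transition matrix
Y = np.zeros((T, k))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A.T + rng.standard_t(df=5, size=k)   # heavy-tailed innovations

X, Z = Y[:-1], Y[1:]                            # lagged regressors and current targets
A_hat = np.vstack([Lasso(alpha=0.1, fit_intercept=False).fit(X, Z[:, j]).coef_ for j in range(k)])
print("nonzero coefficients per equation:", (np.abs(A_hat) > 1e-8).sum(axis=1))
```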
16. Periodic negative binomial INGARCH(1, 1) model
- Author
-
Abderrahmen Manaa and Mohamed Bentarzi
- Subjects
Statistics and Probability ,Statistics::Theory ,Heteroscedasticity ,Class (set theory) ,Statistics::Applications ,Autoregressive model ,Modeling and Simulation ,Statistics ,Probabilistic logic ,Negative binomial distribution ,Statistics::Methodology ,Applied mathematics ,Mathematics - Abstract
In this paper, we introduce a class of periodic negative binomial integer-valued generalized autoregressive conditional heteroskedastic model. The basic probabilistic and statistical properties of ...
- Published
- 2021
- Full Text
- View/download PDF
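A simulation sketch of a negative binomial INGARCH(1,1) with a periodically varying intercept, in the spirit of the model class introduced in entry 16; the parametrisation and parameter values are assumptions for illustration only.

```python
# Periodic negative binomial INGARCH(1,1) simulation (illustrative parametrisation).
import numpy as np

rng = np.random.default_rng(6)
T, S = 600, 12
omega = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(S) / S)   # periodic intercepts, period S
alpha, beta, r_disp = 0.3, 0.4, 5.0                         # feedback terms and NB dispersion

y = np.zeros(T, dtype=int)
lam = np.zeros(T)                                           # conditional mean process
lam[0] = omega[0] / (1 - alpha - beta)
for t in range(1, T):
    lam[t] = omega[t % S] + alpha * y[t - 1] + beta * lam[t - 1]
    p = r_disp / (r_disp + lam[t])                          # NB success probability giving mean lam[t]
    y[t] = rng.negative_binomial(r_disp, p)
print("sample mean:", y.mean(), " sample variance:", y.var())   # overdispersion: variance > mean
```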
17. Tests for heteroskedasticity in transformation models
- Author
-
Charl Pretorius, Simos G. Meintanis, and Marie Hušková
- Subjects
Statistics and Probability ,Heteroscedasticity ,Transformation (function) ,Homoscedasticity ,Test statistic ,Null distribution ,Applied mathematics ,Limit (mathematics) ,Statistics, Probability and Uncertainty ,Null hypothesis ,Statistical hypothesis testing ,Mathematics - Abstract
We consider a model whereby a given response variable $Y$, following a transformation $\mathcal{Y} := \mathcal{T}(Y)$, satisfies some classical regression equation. In this transformation model the form of the transformation is specified analytically but incorporates an unknown transformation parameter. We develop testing procedures for the null hypothesis of homoskedasticity for versions of this model where the regression function is considered either known or unknown. The test statistics are formulated on the basis of Fourier-type conditional contrasts of a variance computed under the null hypothesis against the same quantity computed under alternatives. The limit null distribution of the test statistic is studied, as well as the behaviour of the test criterion under alternatives. Since the limit null distribution is complicated, a bootstrap version is suggested in order to actually carry out the test procedures. Monte Carlo results are included that illustrate the finite-sample properties of the new method. The applicability of the new tests to real data is also illustrated.
- Published
- 2021
- Full Text
- View/download PDF
18. Kernel-based Volatility Generalised Least Squares
- Author
-
George Kapetanios, Ilias Chronopoulos, and Katerina Petrova
- Subjects
Statistics and Probability, Economics and Econometrics, Heteroscedasticity, Stochastic volatility, Estimator, Asymptotic distribution, Least squares, Standard error, Linear regression, Applied mathematics, Statistics, Probability and Uncertainty, Volatility (finance), Mathematics - Abstract
The problem of inference in a standard linear regression model with heteroskedastic errors is investigated. A GLS estimator based on a nonparametric kernel estimator of the volatility process is proposed. It is shown that the resulting feasible GLS estimator is T-consistent for a wide range of deterministic and stochastic processes for the time-varying volatility. Moreover, the kernel-GLS estimator is asymptotically more efficient than OLS and hence inference based on its asymptotic distribution is sharper. A Monte Carlo exercise is designed to study the finite sample properties of the proposed estimator and it is shown that tests based on it are correctly sized for a variety of DGPs. As expected, it is found that in some cases, testing based on OLS is invalid. Crucially, even in cases when tests based on OLS or OLS with heteroskedasticity-consistent (HC) standard errors are correctly sized, it is found that inference based on the proposed GLS estimator is more powerful even for relatively small sample sizes.
- Published
- 2021
- Full Text
- View/download PDF
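A feasible-GLS sketch along the lines described in entry 18: fit OLS, kernel-smooth the squared residuals over rescaled time to estimate the volatility path, then reweight. The Gaussian kernel, bandwidth and toy DGP below are illustrative choices, not the paper's exact estimator.

```python
# OLS residuals -> kernel estimate of time-varying variance -> weighted least squares.
import numpy as np

def kernel_variance(resid, bandwidth=0.1):
    """Nadaraya-Watson smooth of squared residuals over rescaled time t/T."""
    T = len(resid)
    u = np.arange(T) / T
    diffs = (u[:, None] - u[None, :]) / bandwidth
    w = np.exp(-0.5 * diffs ** 2)                  # Gaussian kernel weights
    return (w @ resid ** 2) / w.sum(axis=1)

rng = np.random.default_rng(7)
T = 500
x = rng.normal(size=T)
sigma_t = 0.5 + 2.0 * (np.arange(T) / T)           # deterministic time-varying volatility
y = 1.0 + 2.0 * x + sigma_t * rng.normal(size=T)

X = np.column_stack([np.ones(T), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_ols
h_hat = kernel_variance(resid)
W = 1.0 / h_hat                                    # GLS weights = inverse estimated variance
beta_gls = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
print("OLS:", beta_ols.round(3), " kernel-GLS:", beta_gls.round(3))
```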
19. Diagnostic tests for homoskedasticity in spatial cross-sectional or panel models
- Author
-
Badi H. Baltagi, Alain Pirotte, and Zhenlin Yang
- Subjects
Economics and Econometrics ,Heteroscedasticity ,Econometric model ,Applied Mathematics ,Homoscedasticity ,Monte Carlo method ,Applied mathematics ,Martingale difference sequence ,Sample (statistics) ,Variance (accounting) ,Function (mathematics) ,Mathematics - Abstract
We propose an Adjusted Quasi-Score (AQS) method for constructing tests for homoskedasticity in spatial econometric models. We first obtain an AQS function by adjusting the score-type function from the given model to achieve unbiasedness, and then develop an Outer-Product-of-Martingale-Difference (OPMD) estimate of its variance. In standard problems where a genuine (quasi) score vector is available, the AQS–OPMD method leads to improved finite-sample tests over the usual methods. More importantly, in non-standard problems where a genuine (quasi) score is not available and the usual methods fail, the proposed AQS–OPMD method provides feasible solutions. The AQS tests are formally derived and their asymptotic properties examined for three representative models: spatial cross-sectional, static and dynamic panel models. Monte Carlo results show that the proposed AQS tests have good finite sample properties.
- Published
- 2021
- Full Text
- View/download PDF
20. A Statistical Perspective on the Challenges in Molecular Microbial Biology
- Author
-
Susan Holmes and Pratheepa Jeganathan
- Subjects
Statistics and Probability, Topic model, Heteroscedasticity, Intersection (set theory), Applied Mathematics, Fungi, Perspective (graphical), Sampling (statistics), Agricultural and Biological Sciences (miscellaneous), Data science, Article, Visualization, Nonparametric Bayesian, Statistics, Probability and Uncertainty, Uncertainty quantification, General Agricultural and Biological Sciences, General Environmental Science - Abstract
High throughput sequencing (HTS)-based technology enables identifying and quantifying non-culturable microbial organisms in all environments. Microbial sequences have enhanced our understanding of the human microbiome, the soil and plant environment, and the marine environment. All molecular microbial data pose statistical challenges due to contamination sequences from reagents, batch effects, unequal sampling, and undetected taxa. Technical biases and heteroscedasticity have the strongest effects, but different strains across subjects and environments also make direct differential abundance testing unwieldy. We provide an introduction to a few statistical tools that can overcome some of these difficulties and demonstrate those tools on an example. We show how standard statistical methods, such as simple hierarchical mixture and topic models, can facilitate inferences on latent microbial communities. We also review some nonparametric Bayesian approaches that combine visualization and uncertainty quantification. The intersection of molecular microbial biology and statistics is an exciting new venue. Finally, we list some of the important open problems that would benefit from more careful statistical method development.
- Published
- 2022
21. A test for heteroscedasticity in functional linear models
- Author
-
Pramita Bagchi and James Cameron
- Subjects
Statistics and Probability ,Statistics::Theory ,Heteroscedasticity ,Null (mathematics) ,Linear model ,Estimator ,Variance (accounting) ,Measure (mathematics) ,Homoscedasticity ,Statistics::Methodology ,Applied mathematics ,Statistics, Probability and Uncertainty ,Constant (mathematics) ,Mathematics - Abstract
We propose a new test to validate the assumption of homoscedasticity in a functional linear model. We consider a minimum distance measure of heteroscedasticity in functional data, which is zero in the case where the variance is constant and positive otherwise. We derive an explicit form of the measure, propose an estimator for the quantity, and show that an appropriately standardized version of the estimator is asymptotically normally distributed under both the null (homoscedasticity) and alternative hypotheses. We extend this result for residuals from functional linear models and develop a bootstrap diagnostic test for the presence of heteroscedasticity under the postulated model. Moreover, our approach also allows testing for “relevant” deviations from the homoscedastic variance structure and constructing confidence intervals for the proposed measure. We investigate the performance of our method using extensive numerical simulations and a data example.
- Published
- 2021
- Full Text
- View/download PDF
22. D-Optimal Designs for Hierarchical Linear Models with Heteroscedastic Errors
- Author
-
Xin Liu, Kashinath Chatterjee, and Rong-Xian Yue
- Subjects
Statistics and Probability ,Optimal design ,Computational Mathematics ,Heteroscedasticity ,Applied Mathematics ,Multilevel model ,Applied mathematics ,Simple linear model ,Mean squared error matrix ,D optimal ,Equivalence (measure theory) ,Mathematics - Abstract
This paper investigates the optimal design problem for the prediction of the individual parameters in hierarchical linear models with heteroscedastic errors. An equivalence theorem is established to characterize D-optimality of designs for the prediction based on the mean squared error matrix. The admissibility of designs is also considered and a sufficient condition to simplify the design problem is obtained. The results obtained are illustrated in terms of a simple linear model with random slope and heteroscedastic errors.
- Published
- 2021
- Full Text
- View/download PDF
23. Structural change tests under heteroskedasticity: Joint estimation versus two‐steps methods
- Author
-
Pierre Perron and Yohei Yamamoto
- Subjects
Statistics and Probability, Estimation, Heteroscedasticity, Applied Mathematics, Likelihood-ratio test, Cusum test, Statistics, Statistics, Probability and Uncertainty, U-statistic, Joint (geology), Mathematics
- Published
- 2021
- Full Text
- View/download PDF
24. Dynamic covariance modeling with artificial neural networks
- Author
-
Amanda M. Y. Chu, Mike K. P. So, and Wing Ki Liu
- Subjects
Statistics and Probability ,Multivariate statistics ,Heteroscedasticity ,Autoregressive model ,Artificial neural network ,Computer science ,Covariance matrix ,Applied Mathematics ,Autoregressive conditional heteroskedasticity ,Applied mathematics ,Covariance ,Analysis ,Cholesky decomposition - Abstract
This article proposes a novel multivariate generalized autoregressive conditionally heteroscedastic (GARCH) model that incorporates the modified Cholesky decomposition for a covariance matrix in or...
- Published
- 2021
- Full Text
- View/download PDF
25. Commodity prices at the quantiles
- Author
-
Marilena Furno
- Subjects
Statistics and Probability ,quantile regression ,Applied Mathematics ,Economics ,Econometrics ,Unit root tests ,Commodity (Marxism) ,Analysis ,heteroscedasticity ,Quantile - Abstract
Commodity price stationarity grants good predictability and effective policy intervention. The analysis at the quantiles shows that prices are stationary at the lower but not at the higher quantiles. The non-stationarity of commodity prices is often related to the behavior of the US exchange rate series. Quantile regression estimates show changing coefficients across quantiles. This signals the presence of heteroscedasticity in the series. Stationarity holds throughout once the series are corrected for heteroscedasticity, exchange rate included. This finding weakens the claim that exchange rates cause persistence in commodity prices. The price-exchange rate correlation mirrors their common heteroscedasticity. The correlation declines in the cleaned/homoscedastic series.
- Published
- 2021
- Full Text
- View/download PDF
26. Variable selection via quantile regression with the process of Ornstein-Uhlenbeck type
- Author
-
Yinfeng Wang and XinSheng Zhang
- Subjects
Statistics::Theory ,Heteroscedasticity ,General Mathematics ,Linear model ,Estimator ,Asymptotic distribution ,Ornstein–Uhlenbeck process ,Statistics::Computation ,Quantile regression ,Lasso (statistics) ,Statistics::Methodology ,Applied mathematics ,Mathematics ,Quantile - Abstract
Based on the data-cutoff method, we study quantile regression in linear models, where the noise process is of Ornstein-Uhlenbeck type with possible jumps. In single-level quantile regression, we allow the noise process to be heteroscedastic, while in composite quantile regression, we require that the noise process be homoscedastic so that the slopes are invariant across quantiles. Similar to the independent noise case, the proposed quantile estimators are root-n consistent and asymptotically normal. Furthermore, the adaptive least absolute shrinkage and selection operator (LASSO) is applied for the purpose of variable selection. As a result, the quantile estimators are consistent in variable selection, and the nonzero coefficient estimators enjoy the same asymptotic distribution as their counterparts under the true model. Extensive numerical simulations are conducted to evaluate the performance of the proposed approaches, and foreign exchange rate data are analyzed for illustration purposes.
- Published
- 2021
- Full Text
- View/download PDF
27. Dynamic spatial panel data models with common shocks
- Author
-
Jushan Bai and Kunpeng Li
- Subjects
Economics and Econometrics, Heteroscedasticity, Applied Mathematics, Monte Carlo method, Estimator, Variance (accounting), Asymptotic theory (statistics), Homoscedasticity, Econometrics, Normality, Panel data, Mathematics - Abstract
This paper studies dynamic spatial panel data models with common shocks to deal with both weak and strong cross-sectional correlations. Weak correlations are captured by a spatial structure and strong correlations are captured by a factor structure. The proposed quasi-maximum likelihood estimator (QMLE) is capable of handling both types of cross-sectional dependence. We provide a rigorous analysis of the asymptotic theory of the QMLE, demonstrating its desirable properties. Heteroskedasticity is explicitly allowed. This is important because QML is inconsistent when heteroskedasticity is present but homoskedasticity is imposed. We further show that when heteroskedasticity is estimated, the limiting variance of the QMLE is not of the sandwich form, regardless of normality. Monte Carlo simulations show that the QMLE has good finite sample properties.
- Published
- 2021
- Full Text
- View/download PDF
28. Mixing ARMA Models with EGARCH Models and Using it in Modeling and Analyzing the Time Series of Temperature
- Author
-
Abduljabbar Ali Mudhir
- Subjects
Mixed model ,Heteroscedasticity ,General Computer Science ,Mixing (mathematics) ,Series (mathematics) ,Autoregressive conditional heteroskedasticity ,Applied mathematics ,General Chemistry ,Variance (accounting) ,Time series ,Conditional variance ,General Biochemistry, Genetics and Molecular Biology ,Mathematics - Abstract
In this article our goal is to mix ARMA models with EGARCH models and to compose a mixed ARMA(R,M)-EGARCH(Q,P) model in two steps. The first step models the data series using the EGARCH model alone, interspersed with steps for detecting the heteroscedasticity effect, estimating the model's parameters, and checking the adequacy of the model; we also predict the conditional variance and verify its convergence to the unconditional variance. The second step mixes ARMA with EGARCH, uses the mixed (composite) model to model the time series data and predict future values, and then assesses the predictive ability of the proposed model using prediction error criteria.
- Published
- 2021
- Full Text
- View/download PDF
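A short EGARCH(1,1) simulation with Gaussian innovations, sketching the volatility component used in the first step described above; the parameter values are illustrative only, and this is not the article's full ARMA(R,M)-EGARCH(Q,P) composite model.

```python
# EGARCH(1,1) simulation: log-variance recursion with size and sign (leverage) effects.
import numpy as np

rng = np.random.default_rng(11)
T = 1500
omega, alpha, gamma, beta = -0.1, 0.15, -0.08, 0.97   # gamma < 0: leverage effect
log_h = np.zeros(T)
r = np.zeros(T)
log_h[0] = omega / (1 - beta)
for t in range(1, T):
    z = r[t - 1] / np.exp(0.5 * log_h[t - 1])          # standardised previous shock
    log_h[t] = omega + beta * log_h[t - 1] + alpha * (abs(z) - np.sqrt(2 / np.pi)) + gamma * z
    r[t] = np.exp(0.5 * log_h[t]) * rng.normal()
print("unconditional sd (simulated):", round(float(r.std()), 4))
```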
29. A modified one stage multiple comparison procedure of exponential location parameters with the control under heteroscedasticity
- Author
-
Shu-Fei Wu
- Subjects
Statistics and Probability ,Heteroscedasticity ,Exponential distribution ,Multiple comparison procedure ,One stage ,Applied mathematics ,Control (linguistics) ,Mathematics ,Exponential function - Abstract
In this paper, we present a modified one-stage multiple comparison procedure for exponential location parameters with the control under heteroscedasticity including one-sided and two-sided confiden...
- Published
- 2021
- Full Text
- View/download PDF
30. Distance-covariance-based tests for heteroscedasticity in nonlinear regressions
- Author
-
Mingxiang Cao and Kai Xu
- Subjects
Score test ,Heteroscedasticity ,General Mathematics ,Null (mathematics) ,Applied mathematics ,Regression analysis ,Covariance ,Nonlinear regression ,Statistical hypothesis testing ,Mathematics ,Parametric statistics - Abstract
We use distance covariance to introduce novel consistent tests of heteroscedasticity for nonlinear regression models in multidimensional spaces. The proposed tests require no user-defined regularization, are simple to implement based only on pairwise distances between points in the sample, and are applicable even with non-normal errors and many covariates in the regression model. We establish the asymptotic distributions of the proposed test statistics under the null and alternative hypotheses and under a sequence of local alternatives converging to the null at the fastest possible parametric rate. In particular, we focus on whether and how the estimation of the finite-dimensional unknown parameter vector in the regression functions affects the distribution theory. It turns out that the asymptotic null distributions of the suggested test statistics depend on the data generating process, and so a bootstrap scheme and its validity are considered. Simulation studies demonstrate the versatility of our tests in comparison with the score test, the Cramer-von Mises test, the Kolmogorov-Smirnov test and the Zheng-type test. We also use the ultrasonic reference block data set from the National Institute of Standards and Technology (NIST) of the USA to illustrate the practicability of our proposals.
- Published
- 2021
- Full Text
- View/download PDF
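A textbook computation of the squared sample distance covariance, the building block behind the tests in entry 30, applied here to absolute residuals versus covariates; it is not the authors' test statistic or its bootstrap calibration, and the toy data are assumptions.

```python
# Squared sample distance covariance via double-centred pairwise distance matrices.
import numpy as np

def distance_covariance(x, y):
    x = np.atleast_2d(x.T).T                        # ensure (n, d) shape for 1-D inputs
    y = np.atleast_2d(y.T).T
    def centred(z):
        d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()
    A, B = centred(x), centred(y)
    return (A * B).mean()

rng = np.random.default_rng(8)
n = 300
x = rng.normal(size=(n, 2))
eps = (1 + np.abs(x[:, 0])) * rng.normal(size=n)    # heteroskedastic errors
print("dCov^2(|eps|, x):", round(float(distance_covariance(np.abs(eps), x)), 4))
print("dCov^2(indep noise, x):", round(float(distance_covariance(rng.normal(size=n), x)), 4))
```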
31. Cross-sample entropy estimation for time series analysis: a nonparametric approach
- Author
-
Ignacio Ramírez-Parietti, Byron J. Idrovo-Aguirre, and Javier E. Contreras-Reyes
- Subjects
Heteroscedasticity ,Series (mathematics) ,Generalization ,Applied Mathematics ,Mechanical Engineering ,Nonparametric statistics ,Aerospace Engineering ,Estimator ,Ocean Engineering ,Sample entropy ,Control and Systems Engineering ,Econometrics ,Entropy (information theory) ,Electrical and Electronic Engineering ,Time series ,Mathematics - Abstract
Cross-sample entropy (CSE) allows one to analyze the level of association between two time series that are not necessarily stationary. The current criteria to estimate the CSE are based on the normality assumption, but this condition is not necessarily satisfied in reality. Also, CSE calculation is based on a tolerance and an embedding dimension parameter, which are defined rather subjectively. In this paper, we define a new way of estimating the CSE with a nonparametric approach. Specifically, a residual-based bootstrap-type estimator is considered for long-memory and heteroskedastic models. Subsequently, the established criteria are redefined for the approach of interest for generalization purposes. Finally, a simulation study serves to evaluate the performance of this estimation technique. An application to foreign exchange market data before and after the 1999 Asian financial crisis was considered to study the synchrony level of the CAD/USD and SGD/USD foreign exchange rate time series. The bootstrap-type method allowed us to obtain a more realistic estimation of the cross-sample entropy (CSE) statistics. Specifically, the estimated CSE was slightly different from that obtained in previous studies, but for both periods the synchrony level between the time series, as measured by CSE, was higher after the 1999 Asian financial crisis.
- Published
- 2021
- Full Text
- View/download PDF
32. Jump-preserving varying-coefficient models for nonlinear time series
- Author
-
Chao Koo and Pavel Cizek
- Subjects
Statistics and Probability, Economics and Econometrics, Mathematical optimization, Heteroscedasticity, Nonlinear time series, Asymptotic distribution, Classification of discontinuities, Discontinuity, Applied mathematics, Finite set, Varying-coefficient models, Mathematics, Series (mathematics), Nonparametric statistics, Estimator, Local linear fitting, Asymptotics, Piecewise, Statistics, Probability and Uncertainty - Abstract
An important and widely used class of semiparametric models is formed by the varying-coefficient models. Although the varying coefficients are traditionally assumed to be smooth functions, the varying-coefficient model is considered here with the coefficient functions containing a finite set of discontinuities. Contrary to the existing nonparametric and varying-coefficient estimation of piecewise smooth functions, the varying-coefficient models are considered here under dependence and are applicable in time series with heteroskedastic and serially correlated errors. Additionally, the conditional error variance is allowed to exhibit discontinuities at a finite set of points too. The (uniform) consistency and asymptotic normality of the proposed estimators are established and the finite-sample performance is tested via a simulation study and in a real-data example.
- Published
- 2021
- Full Text
- View/download PDF
33. Integer‐valued asymmetric garch modeling
- Author
-
Xiaofei Hu and Beth Andrews
- Subjects
Statistics and Probability, Heteroscedasticity, Applied Mathematics, Autoregressive conditional heteroskedasticity, Asymptotic distribution, Estimator, Parameter space, Poisson distribution, Ergodic theory, Applied mathematics, Statistics, Probability and Uncertainty, Conditional variance, Mathematics - Abstract
We propose a GARCH model for uncorrelated, integer‐valued time series that exhibit conditional heteroskedasticity. Conditioned on past information, these observations have a two‐sided Poisson distribution with time‐varying variance. Positive and negative observations can have an asymmetric impact on conditional variance. We give conditions under which the proposed integer‐valued GARCH process is stationary, ergodic, and has finite moments. We consider maximum likelihood estimation for model parameters, and we give the limiting distribution for these estimators when the true parameter vector is in the interior of its parameter space, and when some GARCH coefficients are zero.
- Published
- 2021
- Full Text
- View/download PDF
34. Uniformly implementable small sample integrated likelihood ratio test for one-way and two-way ANOVA under heteroscedasticity and normality
- Author
-
Swapnil M. Patil and H. V. Kulkarni
- Subjects
Statistics and Probability, Economics and Econometrics, Heteroscedasticity, Computer science, Group (mathematics), Applied Mathematics, Ratio test, Two-way analysis of variance, Marginal likelihood, Simple (abstract algebra), Modeling and Simulation, Statistics, Analysis of variance, Social Sciences (miscellaneous), Analysis, Normality - Abstract
ANOVA under normally distributed responses and heteroscedastic variances is commonly encountered in the biological, behavioral, educational and agricultural sciences, where the commonly used F-test is not valid. Many alternatives suggested in the literature exhibit unsatisfactory performance with respect to type-I errors, notably under a large number of small-sized groups. This has a direct bearing on their power performance. Anticipating that a major cause may be the existence of a large number of unknown, unequal group variances as nuisance parameters, the present work attempts to provide a uniformly implementable, simple solution that addresses this problem through the use of likelihood integration with respect to the nuisance parameters. The second-order accurate asymptotic $\chi^2$ distribution of the test is established. Simple ad hoc corrective adjustments suggested for enhancing the small-sample distributional performance make the test usable even for small group sizes. Simulation studies demonstrate that the test exhibits uniformly well-concentrated sizes at the desired level and the best power, particularly under very small group sizes, highly scattered group variances and/or a large number of groups in one-way and two-way ANOVA, where a better option is precisely what is needed. Being closely competitive with its peers in all other cases, it offers a universally implementable and trustworthy option in this scenario. The method is straightforwardly extendable to the multi-factor setup and has a direct connection to ANOVA under log-normally distributed data. Results are illustrated with real data.
- Published
- 2021
- Full Text
- View/download PDF
35. To keep faith with homoskedasticity or to go back to heteroskedasticity? The case of FATANG stocks
- Author
-
José Dias Curto
- Subjects
Heteroscedasticity, Applied Mathematics, Mechanical Engineering, Autoregressive conditional heteroskedasticity, US stock markets, Aerospace Engineering, Ocean Engineering, Half-life, Stock market index, Persistence, Control and Systems Engineering, Volatility, Homoscedasticity, Kurtosis, Economics, Econometrics, Statistical dispersion, Electrical and Electronic Engineering, Volatility (finance), Semi-kurtosis, Stock (geology) - Abstract
Did the pattern of US stock market volatility change due to COVID-19 or have the US stock markets been less volatile despite the pandemic shock? And as for tech stocks, are they even less volatile than the market overall? In this paper, we provide evidence in favor of a “quietness” in the stock markets, interrupted by COVID-19, by analyzing dispersion, skewness and kurtosis characteristics of the empirical distribution of nine returns series that include individual FATANG stocks (FAANG: Facebook, Amazon, Apple, Netflix and Google; plus Tesla) and US indices (S&P 500, DJIA and NASDAQ). In comparison with the years before, the daily average return after COVID-19 was 6.48, 2.58 and 2.34 times higher for Tesla, Apple and NASDAQ, respectively. In terms of volatility, the increase was more pronounced in the three stock indices when compared to the individual FATANG stocks. This paper also puts forward a new methodology based on semi-variance and semi-kurtosis. While the value of the ratio between semi-kurtosis and kurtosis is always higher than 70% for the three US stock indices, in the case of stocks the opposite is true, which highlights the importance of large positive returns when compared to negative ones. Structural breaks and conditional heteroskedasticity are also analyzed by considering the traditional symmetrical and asymmetrical GARCH models. We show that in the most recent past, despite the COVID-19 pandemic, the FATANG tech stocks are characterized mostly by conditional homoskedasticity, while the returns of US stock indices are characterized mainly by conditional heteroskedasticity.
- Published
- 2021
- Full Text
- View/download PDF
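A small sketch of downside ("semi") moments of a return series, in the spirit of the semi-variance and semi-kurtosis ratio discussed in entry 35; the definitions below (below-mean deviations only) are hypothetical and may differ from the authors' exact formulas.

```python
# Semi-variance, semi-kurtosis and the semi-kurtosis/kurtosis ratio (hypothetical definitions).
import numpy as np

def semi_moments(returns):
    r = np.asarray(returns, dtype=float)
    mu = r.mean()
    downside = np.minimum(r - mu, 0.0)             # keep only below-mean deviations
    semi_var = (downside ** 2).mean()
    semi_kurt = (downside ** 4).mean() / semi_var ** 2
    kurt = ((r - mu) ** 4).mean() / r.var() ** 2
    return semi_var, semi_kurt, semi_kurt / kurt   # last value: semi-kurtosis / kurtosis ratio

rng = np.random.default_rng(9)
returns = rng.standard_t(df=4, size=2500) * 0.01   # fat-tailed synthetic daily returns
print(semi_moments(returns))
```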
36. A general Bayesian model for heteroskedastic data with fully conjugate full-conditional distributions
- Author
-
Scott H. Holan, Paul A. Parker, and Skye Wills
- Subjects
Statistics and Probability, Mixed model, Heteroscedasticity, Bayesian inference, Econometrics, Mathematics, Variance function, Series (mathematics), Applied Mathematics, Conditional probability distribution, Modeling and Simulation, Statistics, Probability and Uncertainty, Environmental statistics, Gibbs sampling - Abstract
Models for heteroskedastic data are relevant in a wide variety of applications ranging from financial time series to environmental statistics. However, the topic of modeling the variance function conditionally has not seen nearly as much attention as modeling the mean. Volatility models have been used in specific applications, but these models can be difficult to fit in a Bayesian setting due to posterior distributions that are challenging to sample from efficiently. In this work, we introduce a general model for heteroskedastic data. This approach models the conditional variance in a mixed model framework as a function of any desired covariates or random effects. We rely on new distribution theory in order to construct priors that yield fully conjugate full conditional distributions. Thus, our approach can easily be fit via Gibbs sampling. Furthermore, we extend the model to a deep learning approach that can provide highly accurate estimates for time dependent data. We also provide an extension for heavy-tailed data. We illustrate our methodology via three applications. The first application utilizes a high dimensional soil dataset with inherent spatial dependence. The second application involves modeling of asset volatility. The third application focuses on clinical trial data for creatinine.
- Published
- 2021
- Full Text
- View/download PDF
37. Variable selection and collinearity processing for multivariate data via row-elastic-net regularization
- Author
-
Wenjuan Zhai, Lingchen Kong, and Bingzhen Chen
- Subjects
Statistics and Probability, Elastic net regularization, Economics and Econometrics, Multivariate statistics, Heteroscedasticity, Computer science, Applied Mathematics, Feature selection, Regression analysis, Collinearity, Modeling and Simulation, Bayesian multivariate linear regression, Outlier, Algorithm, Social Sciences (miscellaneous), Analysis - Abstract
Multivariate data is collected in many fields, such as chemometrics, econometrics, financial engineering and genetics. In multivariate data, heteroscedasticity and collinearity occur frequently. Selecting material predictors is also a key issue when analyzing multivariate data. To accomplish these tasks, a multivariate linear regression model is often constructed. We thus propose a row-sparse elastic-net regularized multivariate Huber regression model in this paper. For this new model, we prove its grouping effect property and its property of resisting sample outliers. Based on the KKT condition, an accelerated proximal sub-gradient algorithm is designed to solve the proposed model and its convergence is also established. To demonstrate the accuracy and efficiency, simulation and real data experiments are carried out. The numerical results show that the new model can deal with heteroscedasticity and collinearity well.
- Published
- 2021
- Full Text
- View/download PDF
38. Semiparametric estimation for average causal effects using propensity score-based spline
- Author
-
Xinyi Xu, Qing Jiang, Xingwei Tong, Bo Lu, and Peng Wu
- Subjects
Statistics and Probability, Heteroscedasticity, Applied Mathematics, Population, Estimator, Weighting, Delta method, Covariate, Propensity score matching, Econometrics, Observational study, Statistics, Probability and Uncertainty, Education, Mathematics - Abstract
When estimating the average causal effect in observational studies, researchers have to tackle both self-selection of treatment and outcome modeling. This is difficult because the parametric form of the outcome model is often unknown and there exists a large number of covariates. In this work, we present a semiparametric strategy for estimating the average causal effect by regressing on the propensity score. Furthermore, we show that regression error terms usually depend on the propensity score as well, which could cause heteroscedastic variances, and thus construct a refined estimator to improve the estimation efficiency. Both estimators are shown to be consistent and asymptotically normally distributed, with the latter one having a smaller asymptotic variance. The simulation studies indicate that our methods compare favorably with many competing estimators. Our methods are easy to implement and avoid hazardous impact due to extreme weights as often seen in weighting estimators. They can also be extended to handle subgroup effects with known structure. We apply the proposed methods to data from the Ohio Medicaid Assessment Survey 2012, estimating the effect of having health insurance on self-reported health status for a population with subsidized insurance plan choices under the Affordable Care Act.
- Published
- 2021
- Full Text
- View/download PDF
39. Adaptive-weighted estimation of semi-varying coefficient models with heteroscedastic errors
- Author
-
Yong Zhou and Yuze Yuan
- Subjects
Statistics and Probability, Estimation, Heteroscedasticity, Applied Mathematics, Nonparametric statistics, Modeling and Simulation, Applied mathematics, Statistics, Probability and Uncertainty, Parametric statistics, Mathematics - Abstract
An adaptive-weighted estimation procedure for parametric and nonparametric coefficients in semi-varying coefficient models with heteroscedastic errors is considered in this paper. Firstly, we prese...
- Published
- 2021
- Full Text
- View/download PDF
40. Squaring Things Up with R2: What It Is and What It Can (and Cannot) Tell You
- Author
-
Kevin Chalifoux, Félix Camirand Lemyre, Pascal Mireault, and Brigitte Desharnais
- Subjects
Heteroscedasticity, Chemical Health and Safety, Coefficient of determination, Calibration (statistics), Calibration curve, Health, Toxicology and Mutagenesis, Linearity, Context (language use), Regression analysis, Toxicology, Analytical Chemistry, Calibration, Linear regression, Linear Models, Environmental Chemistry, Applied mathematics, Mathematics - Abstract
The coefficient of correlation (r) and the coefficient of determination (R2 or r2) have long been used in analytical chemistry, bioanalysis and forensic toxicology as figures demonstrating linearity of the calibration data in method validation. We clarify here what these two figures are and why they should not be used for this purpose in the context of model fitting for prediction. R2 evaluates whether the data are better explained by the regression model used than by no model at all (i.e., a flat line of slope = 0 and intercept $\bar y$), and to what degree. Hopefully, in the context of calibration curves, the fact that a linear regression better explains the data than no model at all should not be a point of contention. Upon closer examination, a series of restrictions appear in the interpretation of these coefficients. They cannot indicate whether the dataset at hand is linear or not, because they assume that the regression model used is an adequate model for the data. For the same reason, they cannot disprove the existence of another functional relationship in the data. By definition, they are influenced by the variability of the data. The slope of the calibration curve will also change their value. Finally, when heteroscedastic data are analyzed, the coefficients will be influenced by the spacing of calibration levels within the dynamic range, unless a weighted version of the equations is used. With these considerations in mind, we suggest no longer using r and R2 as figures of merit to demonstrate linearity of calibration curves in method validations. Of course, this does not preclude their use in other contexts. Alternative paths for evaluating linearity and calibration model validity are summarily presented.
- Published
- 2021
- Full Text
- View/download PDF
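A short numerical illustration of the points in entry 40: R2 computed as the improvement of a fitted line over a flat line at the mean, plus a weighted variant for heteroscedastic calibration data; the 1/x² weighting shown is just an example scheme, not a recommendation from the paper.

```python
# R^2 = 1 - SS_res/SS_tot, i.e. improvement over the flat line at y-bar, with optional weights.
import numpy as np

def r_squared(x, y, weights=None):
    w = np.ones_like(y) if weights is None else weights
    W = np.diag(w)
    X = np.column_stack([np.ones_like(x), x])
    b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # weighted least-squares line
    y_hat = X @ b
    y_bar = np.average(y, weights=w)                # weighted flat-line baseline
    ss_res = np.sum(w * (y - y_hat) ** 2)
    ss_tot = np.sum(w * (y - y_bar) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(10)
x = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)        # calibration levels
y = 0.05 + 2.0 * x + rng.normal(scale=0.02 * x)               # heteroscedastic response
print("unweighted R^2:", round(r_squared(x, y), 5))
print("1/x^2-weighted R^2:", round(r_squared(x, y, weights=1 / x ** 2), 5))
```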
41. Heteroscedastic additive models - Estimating the fixed effects and covariance matrix parameters
- Author
-
Miguel Fonseca and Adilson Da Silva
- Subjects
Statistics and Probability ,Kronecker product ,Heteroscedasticity ,Algebra and Number Theory ,Covariance matrix ,symbols.namesake ,orthogonal matrix,Kronecker product,additive design,heteroscedasticity ,symbols ,İstatistik ve Olasılık ,Applied mathematics ,Geometry and Topology ,Orthogonal matrix ,Additive model ,Analysis ,Mathematics - Abstract
This work aims to deduce estimators for the unknown parameters of the fixed effects and the covariance matrix structure in a heteroscedastic additive design. To do so, the design will be projected onto the orthogonal complement of the subspace spanned by the columns of the design matrix for the fixed effects; the Kronecker product will be used to produce unbiased estimators for the parameters of the covariance matrix, and these estimators will in turn be used to produce an estimator for the fixed-effects vector. Moreover, the coefficient of determination for both the fixed effects and the covariance structure will be derived. A simulation study will be conducted, and a numerical example will be explored.
- Published
- 2021
- Full Text
- View/download PDF
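A simplified sketch of the projection idea in the entry above, reduced by assumption to a single random factor with homoscedastic errors rather than the paper's multi-factor Kronecker structure: quadratic forms of the data projected onto the orthogonal complement of the fixed-effects column space yield unbiased moment equations for the variance components, which then feed a plug-in GLS step for the fixed effects.
```python
import numpy as np

rng = np.random.default_rng(7)
n, p, q = 300, 3, 10
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])   # fixed-effects design
Z = np.zeros((n, q))
Z[np.arange(n), rng.integers(0, q, n)] = 1.0                      # random-factor design
beta, s2_u, s2_e = np.array([1.0, 2.0, -1.0]), 0.8, 0.5
y = X @ beta + Z @ (np.sqrt(s2_u) * rng.normal(size=q)) + np.sqrt(s2_e) * rng.normal(size=n)

M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)     # projector onto the complement of C(X)
G = Z @ Z.T
# Equate observed quadratic forms to their expectations (unbiased moment equations):
#   E[y'My]   = s2_u tr(MG)   + s2_e tr(M)
#   E[y'MGMy] = s2_u tr(MGMG) + s2_e tr(MGM)
A = np.array([[np.trace(M @ G), np.trace(M)],
              [np.trace(M @ G @ M @ G), np.trace(M @ G @ M)]])
b = np.array([y @ M @ y, y @ M @ G @ M @ y])
s2_u_hat, s2_e_hat = np.linalg.solve(A, b)

V = s2_u_hat * G + s2_e_hat * np.eye(n)               # plug-in GLS for the fixed effects
Vi = np.linalg.inv(V)
beta_hat = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
print(np.round([s2_u_hat, s2_e_hat], 2), np.round(beta_hat, 2))
```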
42. Closed-form random utility models with mixture distributions of random utilities: Exploring finite mixtures of qGEV models
- Author
-
Fiore Tinessa
- Subjects
050210 logistics & transportation ,Heteroscedasticity ,05 social sciences ,Multiplicative function ,Transportation ,010501 environmental sciences ,Management Science and Operations Research ,Covariance ,01 natural sciences ,Goodness of fit ,Homoscedasticity ,0502 economics and business ,Embedding ,Applied mathematics ,Marginal distribution ,0105 earth and related environmental sciences ,Civil and Structural Engineering ,Mathematics ,Weibull distribution - Abstract
This paper investigates the class of random utility models (RUM) derived under the assumption of random utilities or disutilities following mixture distributions. We introduce a general framework embedding the models of Mattsson et al. (2014) and the q-product GEV model of Chikaraishi and Nakayama (2016), while extending the investigations of Papola (2016). New closed-form models are obtained, with mixtures of covariance matrices, mathematical forms of utility (additive, multiplicative or in-between), variances of utilities (heteroscedasticity) and marginal distributions. The models are compared in two cross-validation exercises, based on a real dataset of travel mode preferences, outperforming existing heteroscedastic and homoscedastic closed-form models in terms of both in-sample and out-of-sample goodness of fit. The behavioural implications of the models are also discussed.
- Published
- 2021
- Full Text
- View/download PDF
43. Statistical inference for mixture GARCH models with financial application
- Author
-
Maddalena Cavicchioli
- Subjects
Statistics and Probability ,model selection ,Heteroscedasticity ,Autoregressive conditional heteroskedasticity ,Model selection ,Fisher information matrix ,volatility ,Estimator ,Computational Mathematics ,symbols.namesake ,Matrix (mathematics) ,Mixture GARCH models ,Autoregressive model ,symbols ,Statistical inference ,Applied mathematics ,Markov switching models, Mixture GARCH models, Estimation, Fisher information matrix, volatility, model selection ,Statistics, Probability and Uncertainty ,Fisher information ,Markov switching models ,Estimation ,Mathematics - Abstract
In this paper we consider mixture generalized autoregressive conditional heteroskedastic (GARCH) models, and propose a new EM-type iterative algorithm for the estimation of the model parameters. The maximum likelihood estimates are shown to be consistent, and their asymptotic properties are investigated. More precisely, we derive simple closed-form expressions for the asymptotic covariance matrix and the expected Fisher information matrix of the ML estimator. Finally, we study model selection and propose testing procedures. A simulation study and an application to real financial series illustrate the results.
- Published
- 2021
- Full Text
- View/download PDF
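A data-generating sketch only, under an assumed two-component specification in which each component keeps its own GARCH(1,1) variance recursion; this illustrates the kind of mixture GARCH process the entry studies, not its EM estimation algorithm.
```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
pi = np.array([0.7, 0.3])                     # mixing weights
omega = np.array([0.05, 0.20])                # component-specific GARCH parameters
alpha = np.array([0.05, 0.10])
beta = np.array([0.90, 0.80])

y = np.zeros(T)
h = np.full(2, 1.0)                           # conditional variances of both components
for t in range(1, T):
    h = omega + alpha * y[t - 1] ** 2 + beta * h   # update both recursions with the same return
    k = rng.choice(2, p=pi)                        # latent component at time t
    y[t] = np.sqrt(h[k]) * rng.normal()

print(y.std(), np.abs(y).max())               # overall scale and the fat-tailed extremes
```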
44. Bootstrap for integer‐valued GARCH(p, q) processes
- Author
-
Michael H. Neumann
- Subjects
Statistics and Probability ,Heteroscedasticity ,Stationary distribution ,Markov kernel ,Autoregressive conditional heteroskedasticity ,Poisson distribution ,symbols.namesake ,Autoregressive model ,symbols ,Applied mathematics ,Uniqueness ,Statistics, Probability and Uncertainty ,Contraction (operator theory) ,Mathematics - Abstract
We consider integer‐valued processes with a linear or nonlinear generalized autoregressive conditional heteroscedastic (GARCH) model structure, where the count variables given the past follow a Poisson distribution. We show that a contraction condition imposed on the intensity function yields a contraction property of the Markov kernel of the process. This allows almost effortless proofs of the existence and uniqueness of a stationary distribution as well as of absolute regularity of the count process. As our main result, we construct a coupling of the original process and a model-based bootstrap counterpart. Using a contraction property of the Markov kernel of the coupled process, we obtain bootstrap consistency for different types of statistics.
- Published
- 2021
- Full Text
- View/download PDF
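A sketch of the model-based bootstrap idea from the entry above under assumed parameter values: simulate a Poisson INGARCH(1,1) series, estimate the intensity recursion by conditional maximum likelihood, then regenerate bootstrap series from the fitted model. The paper's contribution concerns why such a bootstrap is consistent; the code is only a mechanical illustration.
```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def simulate(omega, alpha, beta, T, rng):
    # Poisson INGARCH(1,1): lambda_t = omega + alpha * y_{t-1} + beta * lambda_{t-1}
    y, lam = np.zeros(T, dtype=int), np.zeros(T)
    lam[0] = omega / max(1 - alpha - beta, 0.05)      # roughly the stationary mean
    y[0] = rng.poisson(lam[0])
    for t in range(1, T):
        lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1]
        y[t] = rng.poisson(lam[t])
    return y

def neg_loglik(theta, y):
    omega, alpha, beta = theta
    lam = np.empty(len(y))
    lam[0] = y.mean()
    for t in range(1, len(y)):
        lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1]
    return -(y * np.log(lam) - lam).sum()             # Poisson log-likelihood up to a constant

bounds = [(1e-4, None), (0.0, 0.95), (0.0, 0.95)]
y = simulate(1.0, 0.3, 0.4, 1000, rng)
fit = minimize(neg_loglik, x0=[1.0, 0.2, 0.2], args=(y,), bounds=bounds)

# Model-based bootstrap: regenerate series from the fitted recursion and re-estimate.
boot = [minimize(neg_loglik, x0=fit.x, bounds=bounds,
                 args=(simulate(*fit.x, 1000, rng),)).x
        for _ in range(20)]
print(fit.x, np.std(boot, axis=0))                    # point estimates and bootstrap spread
```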
45. Empirical likelihood inference for threshold autoregressive conditional heteroscedasticity model
- Author
-
Cui-Xin Peng and Zhi-Wen Zhao
- Subjects
Heteroscedasticity ,Estimation theory ,Threshold autoregressive model ,Applied Mathematics ,lcsh:Mathematics ,Coverage probability ,Estimating equations ,Empirical likelihood ,lcsh:QA1-939 ,Autoregressive model ,Conditional heteroscedasticity ,Confidence region ,Statistics ,Discrete Mathematics and Combinatorics ,Statistics::Methodology ,Least squares method ,Analysis ,Statistic ,Mathematics - Abstract
This paper considers the parameter estimation problem of a first-order threshold autoregressive conditional heteroscedasticity model by using the empirical likelihood method. We obtain the empirical likelihood ratio statistic based on the estimating equation of the least squares estimation and construct the confidence region for the model parameters. Simulation studies indicate that the empirical likelihood method outperforms the normal approximation-based method in terms of coverage probability.
- Published
- 2021
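The threshold-ARCH details do not fit in a short snippet, but the core empirical-likelihood computation the entry relies on can be illustrated for the simplest scalar-mean case (the estimating function g below would be replaced by the model's least-squares estimating equations):
```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """Owen's empirical log-likelihood ratio statistic for the mean at a candidate mu."""
    g = x - mu                                    # estimating function values
    if g.min() * g.max() >= 0:                    # mu outside the convex hull of the data
        return np.inf
    # lambda solves sum(g / (1 + lambda*g)) = 0, bracketed so that 1 + lambda*g > 0 for all i
    lo = (-1 + 1e-10) / g.max()
    hi = (-1 + 1e-10) / g.min()
    lam = brentq(lambda l: np.sum(g / (1 + l * g)), lo, hi)
    return 2 * np.sum(np.log1p(lam * g))          # ~ chi-square(1) at the true mean

rng = np.random.default_rng(3)
x = rng.normal(loc=2.0, scale=1.0, size=200)
print(el_log_ratio(x, 2.0), el_log_ratio(x, 2.5))  # small at the truth, very large at 2.5
```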
46. Heteroscedastic Laplace mixture of experts regression models and applications
- Author
-
Liu-cang Wu, Shu-yu Zhang, and Shuang-shuang Li
- Subjects
Statistics::Theory ,Heteroscedasticity ,education.field_of_study ,Laplace transform ,Applied Mathematics ,Population ,Estimator ,Regression analysis ,02 engineering and technology ,01 natural sciences ,Laplace distribution ,Regression ,010104 statistics & probability ,Linear regression ,0202 electrical engineering, electronic engineering, information engineering ,Statistics::Methodology ,Applied mathematics ,020201 artificial intelligence & image processing ,0101 mathematics ,education ,Mathematics - Abstract
Mixture of Experts (MoE) regression models are widely studied in statistics and machine learning for modeling heterogeneity in data for regression, clustering and classification. The Laplace distribution is one of the most important statistical tools for analyzing thick-tailed data. Laplace Mixture of Linear Experts (LMoLE) regression models are based on the Laplace distribution, which makes them more robust. Analogous to modeling the variance parameter in a homogeneous population, in this paper we propose and study a novel class of models, heteroscedastic Laplace mixture of experts regression models, for analyzing heteroscedastic data from a heterogeneous population. The issues of maximum likelihood estimation are addressed. In particular, a Minorization-Maximization (MM) algorithm for estimating the regression parameters is developed. Properties of the estimators of the regression coefficients are evaluated through Monte Carlo simulations. Results from the analysis of two real data sets are presented.
- Published
- 2021
- Full Text
- View/download PDF
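A data-generating sketch, with all parameter values assumed, of the kind of heteroscedastic Laplace mixture-of-experts data the entry describes: the gating probabilities, the expert means and the Laplace scales all depend on the covariate.
```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
x = rng.uniform(-2, 2, n)

gate = 1 / (1 + np.exp(-(0.5 + 1.5 * x)))          # P(expert 1 | x), a logistic gate
z = rng.binomial(1, gate)                          # latent expert label

mean = np.where(z == 1, 1.0 + 2.0 * x, -1.0 - 0.5 * x)                    # expert-specific means
scale = np.where(z == 1, np.exp(0.2 + 0.3 * x), np.exp(-0.5 + 0.1 * x))   # heteroscedastic scales
y = rng.laplace(loc=mean, scale=scale)

print(np.bincount(z), round(y.std(), 3))           # expert frequencies and overall spread
```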
47. Hybrid Reweighted Optimization Method for Gridless Direction of Arrival Estimation in Heteroscedastic Noise Environment
- Author
-
O. Al-Dakkak, Khaldoun Khorzom, and Lama Zien Alabideen
- Subjects
Noise ,Heteroscedasticity ,Computer science ,Applied Mathematics ,Modeling and Simulation ,Direction of arrival ,Engineering (miscellaneous) ,Algorithm - Abstract
In this paper, we present a hybrid optimization framework for gridless sparse Direction of Arrival (DoA) estimation under heteroscedastic noise scenarios. The key idea of the proposed framework is to combine global and local minima search techniques, yielding a sparser optimizer with improved robustness to noise variation. In particular, we enforce sparsity by reformulating the Atomic Norm Minimization (ANM) problem through the nonconvex Schatten-p quasi-norm (0 < p < 1)...
- Published
- 2021
- Full Text
- View/download PDF
48. Weighted bias-corrected restricted statistical inference for heteroscedastic semiparametric varying-coefficient errors-in-variables model
- Author
-
Gaorong Li and Weiwei Zhang
- Subjects
Statistics and Probability ,Heteroscedasticity ,05 social sciences ,Linear model ,Nonparametric statistics ,Estimator ,01 natural sciences ,010104 statistics & probability ,0502 economics and business ,Statistical inference ,Applied mathematics ,Errors-in-variables models ,0101 mathematics ,050205 econometrics ,Parametric statistics ,Mathematics ,Variance function - Abstract
In this paper, we consider statistical inference for a heteroscedastic semiparametric varying-coefficient partially linear model with measurement errors in the nonparametric component, when an exact linear restriction on the parametric component is assumed to hold. Two types of weighted bias-corrected restricted estimators of the parametric and nonparametric components are proposed, based on a bias-corrected estimator of the variance function obtained by nonparametric kernel estimation. The asymptotic properties of the resulting estimators are established under some regularity conditions. Moreover, we propose a weighted bias-corrected profile Lagrange multiplier test statistic to check whether the linear restriction of the model is valid. Finally, some simulation studies and a real data example are conducted to assess the performance of our proposed estimators and the testing procedure in finite samples.
- Published
- 2021
- Full Text
- View/download PDF
49. Semiparametric Smoothing Spline in Joint Mean and Dispersion Models with Responses from the Biparametric Exponential Family: A Bayesian Perspective
- Author
-
Edilberto Cepeda and Héctor Zárate
- Subjects
Statistics and Probability ,Heteroscedasticity ,Control and Optimization ,Markov chain ,Computer science ,Bayesian probability ,Markov chain Monte Carlo ,Smoothing spline ,symbols.namesake ,Exponential family ,Artificial Intelligence ,Signal Processing ,Parametric model ,symbols ,Applied mathematics ,Computer Vision and Pattern Recognition ,Statistics, Probability and Uncertainty ,Smoothing ,Information Systems - Abstract
This article extends the fusion among various statistical methods to estimate the mean and variance functions in heteroscedastic semiparametric models when the response variable comes from a two-parameter exponential family distribution. We rely on the natural connection among smoothing methods that use basis functions with penalization, mixed models, and a Bayesian Markov chain Monte Carlo (MCMC) simulation methodology. The significance and implications of our strategy lie in its potential to contribute to a simple and unified computational methodology that takes into account the factors affecting the variability of the responses, which in turn is important for efficient estimation and correct inference on the mean parameters without requiring fully parametric models. An extensive simulation study investigates the performance of the estimates. Finally, an application using Light Detection and Ranging (LIDAR) data highlights the merits of our approach.
- Published
- 2021
- Full Text
- View/download PDF
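Not the article's Bayesian MCMC machinery, but a rough two-step frequentist analogue on simulated data that conveys the joint mean/dispersion idea: smooth the mean, smooth the log squared residuals to obtain a dispersion function, then re-fit the mean with the implied weights.
```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 10, 400))
sd = 0.5 + 0.1 * x                                       # heteroscedastic noise level
y = np.sin(x) + sd * rng.normal(size=x.size)

mean_fit = UnivariateSpline(x, y, k=3)                   # step 1: mean function (default smoothing
                                                         # is sensible when noise sd is near 1)
log_r2 = np.log((y - mean_fit(x)) ** 2 + 1e-12)
# step 2: dispersion function; the smoothing target reflects var(log chi2_1) = pi^2 / 2
disp_fit = UnivariateSpline(x, log_r2, k=3, s=x.size * np.pi ** 2 / 2)
sd_hat = np.exp(0.5 * (disp_fit(x) + 1.27))              # +1.27 corrects the E[log chi2_1] bias

mean_fit_w = UnivariateSpline(x, y, w=1 / sd_hat, k=3)   # step 3: weighted re-fit of the mean
print(np.round(sd_hat[[0, -1]], 2), np.round(sd[[0, -1]], 2))   # estimated vs. true noise sd
```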
50. An econometric approach to the estimation of multi-level models
- Author
-
Yimin Yang and Peter Schmidt
- Subjects
Estimation ,Economics and Econometrics ,Heteroscedasticity ,Computer science ,Applied Mathematics ,05 social sciences ,Multilevel model ,01 natural sciences ,Regression ,Hierarchical database model ,010104 statistics & probability ,Variable (computer science) ,0502 economics and business ,Econometrics ,Endogeneity ,0101 mathematics ,050205 econometrics ,Panel data - Abstract
In this paper we consider “multidimensional” or “hierarchical” or “multilevel” models that are popular in the educational and economics literatures. Instead of two levels (individuals over time in the standard panel data model), we now have multiple levels (e.g. students in classrooms in schools in districts). We apply standard methods of analysis for econometric panel data to multilevel models. Specifically, we generalize the results of Hausman and Taylor and the subsequent literature to these models. This is a non-trivial extension because we now have more than one kind of time-invariant effect and more than one kind of “between” regression. We discuss estimation by GMM both with and without the assumption of no conditional heteroskedasticity. We also discuss endogeneity and dynamic models, and we generalize the concept of testing the exogeneity assumptions using a variable addition test.
- Published
- 2021
- Full Text
- View/download PDF
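A toy illustration, with structure and numbers assumed (this is not the paper's GMM estimator), of why a three-level design creates more than one kind of "between" regression and why endogeneity matters: the within-classroom regression is consistent for the slope, while the classroom-level between regression absorbs the endogenous school effect.
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
schools, classes, students = 20, 5, 30
df = pd.DataFrame({
    "school": np.repeat(np.arange(schools), classes * students),
    "classroom": np.repeat(np.arange(schools * classes), students),
})
school_eff = rng.normal(0, 1.0, schools)[df.school.to_numpy()]
class_eff = rng.normal(0, 0.5, schools * classes)[df.classroom.to_numpy()]
df["x"] = rng.normal(size=len(df)) + 0.5 * school_eff        # regressor correlated with the school effect
df["y"] = 2.0 * df.x + school_eff + class_eff + rng.normal(size=len(df))

# Within-classroom regression: demeaning at the innermost level removes both the classroom
# and the school effects, so OLS on the demeaned data is consistent for the slope of 2.0.
xw = df.x - df.groupby("classroom").x.transform("mean")
yw = df.y - df.groupby("classroom").y.transform("mean")
print(sm.OLS(yw, xw).fit().params)

# One of several "between" regressions: classroom means on classroom means, which still
# carries the (endogenous) school effect and is therefore biased upward here.
cb = df.groupby("classroom")[["x", "y"]].mean()
print(sm.OLS(cb.y, sm.add_constant(cb.x)).fit().params)
```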