35 results for "Jae Youn Ahn"
Search Results
2. A copula transformation in multivariate mixed discrete-continuous models.
- Author
- Jae Youn Ahn, Sebastian Fuchs, and Rosy Oh
- Published
- 2021
3. On structural properties of an asymmetric copula family and its statistical implication.
- Author
- Woojoo Lee, Mijeong Kim, and Jae Youn Ahn
- Published
- 2020
4. On Minimal Copulas under the Concordance Order.
- Author
- Jae Youn Ahn and Sebastian Fuchs
- Published
- 2020
5. Construction of multiple decrement tables under generalized fractional age assumptions.
- Author
- Hangsuck Lee, Jae Youn Ahn, and Bangwon Ko
- Published
- 2019
6. A simple Bayesian state-space approach to the collective risk models
- Author
- Jae Youn Ahn, Himchan Jeong, and Yang Lu
- Subjects
- Statistics and Probability, Economics and Econometrics, Statistics, Probability and Uncertainty
- Published
- 2022
7. Multivariate countermonotonicity and the minimal copulas.
- Author
- Woojoo Lee, Ka Chun Cheung, and Jae Youn Ahn
- Published
- 2017
8. DR-LSTM: Dimension reduction based deep learning approach to predict stock price.
- Author
- Ah-ram Lee, Jae Youn Ahn, Ji Eun Choi, and Kyongwon Kim
- Subjects
- DIMENSION reduction (Statistics), DEEP learning, STOCK prices
- Abstract
In recent decades, increasing research attention has been directed toward predicting stock prices in financial markets using deep learning methods. For instance, the recurrent neural network (RNN) is known to be competitive for time-series datasets. Long short-term memory (LSTM) further improves the RNN by providing an alternative approach to the vanishing gradient problem, and gains predictive accuracy by retaining memory over longer horizons. In this paper, we combine both supervised and unsupervised dimension reduction methods with the LSTM to enhance forecasting performance, and refer to this as the dimension reduction based LSTM (DR-LSTM) approach. For supervised dimension reduction, we use methods such as sliced inverse regression (SIR), sparse SIR, and kernel SIR. Furthermore, principal component analysis (PCA), sparse PCA, and kernel PCA are used as unsupervised dimension reduction methods. Using real stock market index datasets (S&P 500, STOXX Europe 600, and KOSPI), we present a comparative study of predictive accuracy between the six DR-LSTM methods and time series modeling.
- Published
- 2024
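The DR-LSTM pipeline described in entry 8 reduces the feature dimension first and then feeds the reduced series to an LSTM. Below is a minimal sketch of that idea, assuming scikit-learn and PyTorch are available; the toy data, window length, PCA dimension, layer sizes, and training loop are illustrative choices, not the authors' configuration.

```python
# Minimal sketch of the DR-LSTM idea from entry 8: reduce the feature
# dimension first (here with PCA), then feed the reduced series to an LSTM.
# All sizes and hyperparameters are illustrative, not from the paper.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))                               # 500 days, 30 raw features
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.1, size=500)   # toy target series

# Unsupervised dimension reduction (PCA); the paper also considers
# supervised alternatives such as sliced inverse regression (SIR).
Z = PCA(n_components=5).fit_transform(X)

# Rolling windows: predict y[t] from the previous 20 reduced vectors.
win = 20
seqs = np.stack([Z[t - win:t] for t in range(win, len(Z))])
targets = y[win:]

class DRLSTM(nn.Module):
    def __init__(self, n_features: int, hidden: int = 16):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)    # last step -> prediction

model = DRLSTM(n_features=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
xb = torch.tensor(seqs, dtype=torch.float32)
yb = torch.tensor(targets, dtype=torch.float32)
for _ in range(50):                                  # brief illustrative training
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(xb), yb)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4f}")
```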
9. On high-dimensional two sample mean testing statistics: a comparative study with a data adaptive choice of coefficient vector.
- Author
- Soeun Kim, Jae Youn Ahn, and Woojoo Lee
- Published
- 2016
10. Negative dependence concept in copulas and the marginal free herd behavior index.
- Author
- Jae Youn Ahn
- Published
- 2015
11. On the ordering of credibility factors
- Author
- Yang Lu, Himchan Jeong, and Jae Youn Ahn
- Subjects
- Statistics and Probability, Economics and Econometrics, Autocorrelation, Covariance, Random effects model, Credibility, Econometrics, Statistics, Probability and Uncertainty, Mathematics
- Abstract
Traditional credibility analysis of risks in insurance is based on the random effects model, where the heterogeneity across policyholders is assumed to be time-invariant. One popular extension is the dynamic random effects (or state-space) model. However, while the latter allows for time-varying heterogeneity, its application to credibility analysis should be conducted with care due to the possibility of negative credibilities per period [see Pinquet (2020a)]. Another important but under-explored topic is the ordering of the credibility factors in a monotone manner: recent claims ought to have larger weights than old ones. This paper shows that the ordering of the covariance structure of the random effects in the dynamic random effects model does not necessarily imply that of the credibility factors. Subsequently, we show that the state-space model with an AR(1)-type autocorrelation function guarantees the ordering of the credibility factors. Simulation experiments and a case study with a real dataset are conducted to show the relevance in insurance applications.
- Published
- 2021
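Entry 11's central claim, that an AR(1)-type random effect yields monotonically ordered credibility factors, can be illustrated numerically. The sketch below is not from the paper: it computes the linear (BLUP) weights on past claims under an assumed model Y_i = theta_i + eps_i with a stationary AR(1) random effect theta, using arbitrary variance parameters.

```python
# Numerical sketch (not from the paper): under an AR(1) random effect, the
# linear credibility weights on past claims increase with recency.
import numpy as np

n, rho, s2_theta, s2_eps = 5, 0.6, 1.0, 0.5     # arbitrary illustrative values

i = np.arange(1, n + 1)
# Cov(Y_i, Y_j) = s2_theta * rho^|i-j| + s2_eps * 1{i=j}
cov_Y = s2_theta * rho ** np.abs(i[:, None] - i[None, :]) + s2_eps * np.eye(n)
# Cov(Y_i, theta_{n+1}) = s2_theta * rho^{n+1-i}
cov_next = s2_theta * rho ** (n + 1 - i)

weights = np.linalg.solve(cov_Y, cov_next)      # BLUP weights on Y_1, ..., Y_n
print(weights)                                  # recent periods get larger weights
print("ordered:", bool(np.all(np.diff(weights) > 0)))
```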
12. A multi-year microlevel collective risk model
- Author
- Rosy Oh, Himchan Jeong, Jae Youn Ahn, and Emiliano A. Valdez
- Subjects
- Statistics and Probability, Economics and Econometrics, Copula, Portfolio, Random effects model, Econometrics, Statistics, Probability and Uncertainty
- Abstract
For a typical insurance portfolio, the claims process over a short period, typically one year, is characterized by observing the frequency of claims together with the associated claim severities. The collective risk model describes this portfolio as a random sum of the aggregated claim amounts. In the classical framework, for simplicity, the claim frequency and claim severities are assumed to be mutually independent. However, there is growing interest in relaxing this independence assumption, which is more realistic and useful for practical insurance ratemaking. While the common thread has been capturing the dependence between frequency and aggregate severity within a single period, the work of Oh et al. (2021) provides an interesting extension that also captures dependence among individual severities. In this paper, we extend these works within a framework where we have a portfolio of microlevel frequencies and severities for multiple years. This allows us to develop a factor copula model framework that captures various types of dependence between claim frequencies and claim severities over multiple years. It is therefore a clear extension of earlier works on one-year dependent frequency-severity models and on random effects models for capturing serial dependence of claims. We focus on results using a family of elliptical copulas to model the dependence. The paper further describes how to calibrate the proposed model using illustrative claims data from a Singapore insurance company. The estimated results provide strong evidence of all forms of dependence captured by our model.
- Published
- 2021
13. Predictive analysis in insurance: An application of generalized linear mixed models.
- Author
- Rosy Oh, Nayoung Woo, Jae Keun Yoo, and Jae Youn Ahn
- Subjects
- LINEAR models, INSURANCE companies
- Abstract
Generalized linear models and generalized linear mixed models (GLMMs) are fundamental tools for predictive analyses. In insurance, GLMMs are particularly important because they provide not only a tool for prediction but also a theoretical justification for setting premiums. Although thousands of resources are available for introducing GLMMs as a classical and fundamental tool in statistical analysis, few resources seem to be available for the insurance industry. This study targets insurance professionals already familiar with basic actuarial mathematics and explains GLMMs and their linkage with classical actuarial pricing tools, such as the Bühlmann premium method. The focus of the study is mainly on the modeling aspect of GLMMs and their application to pricing, while avoiding technical issues related to statistical estimation, which can be handled automatically by most statistical software.
- Published
- 2023
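For reference, the Bühlmann premium mentioned in entry 13 is the classical credibility-weighted average (a standard result, not specific to this paper), where X-bar is the policyholder's observed mean claim over n periods and mu is the portfolio mean:

```latex
P = Z\,\bar{X} + (1 - Z)\,\mu,
\qquad
Z = \frac{n}{n + k},
\qquad
k = \frac{\mathbb{E}\!\left[\sigma^2(\Theta)\right]}{\operatorname{Var}\!\left(\mu(\Theta)\right)},
```

with Theta the policyholder's unobserved risk level, so that k is the ratio of the expected process variance to the variance of the hypothetical means.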
14. Predictive risk analysis using a collective risk model: Choosing between past frequency and aggregate severity information
- Author
- Youngju Lee, Rosy Oh, Dan Zhu, and Jae Youn Ahn
- Subjects
- Statistics and Probability, Risk analysis, Economics and Econometrics, Risk model, Efficiency, Econometrics, A priori and a posteriori, Statistics, Probability and Uncertainty, Risk classification
- Abstract
The typical risk classification procedure in the insurance field consists of a priori risk classification based on observable risk characteristics and a posteriori risk classification where premiums are adjusted to reflect claim histories. While using the full claim history data is optimal in a posteriori risk classification, some insurance sectors only use partial information to determine the appropriate premium to charge. Examples include auto insurance premiums being calculated based on past claim frequencies, and aggregate severities being used to determine workers' compensation. The motivation is to have a simplified and efficient a posteriori risk classification procedure, customized to the context involved. This study compares the relative efficiency of the two simplified a posteriori risk classifications, that is, those based on frequency and on severity. It provides a mathematical framework to assist practitioners in choosing the most appropriate practice.
- Published
- 2021
15. Double-counting problem of the bonus–malus system
- Author
- Jae Youn Ahn, Kyung Suk Lee, Rosy Oh, and Sojung Carol Park
- Subjects
- Statistics and Probability, Economics and Econometrics, Mathematical optimization, Risk adjustment, Double counting, Bonus-malus, A priori and a posteriori, Statistics, Probability and Uncertainty, Inefficiency
- Abstract
The bonus–malus system (BMS) is a widely used premium adjustment mechanism based on policyholder’s claim history. Most auto insurance BMSs assume that policyholders in the same bonus–malus (BM) level share the same a posteriori risk adjustment. This system reflects the policyholder’s claim history in a relatively simple manner. However, the typical system follows a single BM scale and is known to suffer from the double-counting problem: policyholders in the high-risk classes in terms of a priori characteristics are penalized too severely (Taylor, 1997; Pitrebois et al., 2003). Thus, Pitrebois et al. (2003) proposed a new system with multiple BM scales based on the a priori characteristics. While this multiple-scale BMS removes the double-counting problem, it loses the prime benefit of simplicity. Alternatively, we argue that the double-counting problem can be viewed as an inefficiency of the optimization process. Furthermore, we show that the double-counting problem can be resolved by fully optimizing the BMS setting, but retaining the traditional BMS format.
- Published
- 2020
16. On copula-based collective risk models: from elliptical copulas to vine copulas
- Author
- Jae Youn Ahn, Rosy Oh, and Woojoo Lee
- Subjects
- Statistics and Probability, Economics and Econometrics, Copula (probability theory), Vine copula, Econometrics, Statistics, Probability and Uncertainty, Mathematics
- Abstract
Several collective risk models have recently been proposed by relaxing the widely used but controversial assumption of independence between claim frequency and severity. Approaches include the biva...
- Published
- 2020
17. The Poisson random effect model for experience ratemaking: Limitations and alternative solutions
- Author
- Woojoo Lee, Jae Youn Ahn, and Jeong Hwan Kim
- Subjects
- Statistics and Probability, Economics and Econometrics, Score test, Mixed model, Negative binomial distribution, Poisson distribution, Random effects model, Overdispersion, Gamma distribution, Marginal distribution, Statistics, Probability and Uncertainty, Mathematics
- Abstract
Poisson random effect models with a shared random effect have been widely used in actuarial science for analyzing the number of claims. In particular, the random effect is a key factor in a posteriori risk classification. However, the necessity of the random effect may not be properly assessed due to its dual role: it affects both the marginal distribution of the number of claims and the dependence among the numbers of claims obtained from an individual over time. We first show that the score test for the nullity of the variance of the shared random effect can falsely indicate significant dependence among the numbers of claims even though they are independent. To mitigate this problem, we propose to separate the dual role of the random effect by introducing additional random effects to capture the overdispersion part; these are called saturated random effects. To circumvent the heavy computational issues caused by the saturated random effects, we choose a gamma distribution for them because it gives a closed-form marginal distribution. In fact, this choice leads to the negative binomial random effect model that has been widely used for the analysis of frequency data. We show that safer conclusions about the a posteriori risk classification can be made based on the negative binomial mixed model under various situations. We also derive the score test as a sufficient condition for the existence of the a posteriori risk classification based on the proposed model.
- Published
- 2020
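The closed-form marginal that entry 17 exploits is the classical Poisson-gamma identity, stated here for reference: if N | theta ~ Poisson(lambda * theta) with theta ~ Gamma(r, r) (mean 1), then marginally N is negative binomial:

```latex
\Pr(N = n)
= \int_0^\infty \frac{(\lambda\theta)^n e^{-\lambda\theta}}{n!}\,
  \frac{r^r \theta^{r-1} e^{-r\theta}}{\Gamma(r)}\,d\theta
= \binom{n+r-1}{n}
  \left(\frac{r}{r+\lambda}\right)^{\!r}
  \left(\frac{\lambda}{r+\lambda}\right)^{\!n}.
```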
18. A copula transformation in multivariate mixed discrete-continuous models
- Author
- Sebastian Fuchs, Rosy Oh, and Jae Youn Ahn
- Subjects
- Multivariate statistics, Computational complexity theory, Copula, Probability density function, Conditional probability distribution, Applied mathematics, Marginal distribution, Mathematics
- Abstract
Copulas allow flexible and simultaneous modeling of complicated dependence structures together with various marginal distributions. Especially if the density function can be represented as the product of the marginal density functions and the copula density function, this leads to both an intuitive interpretation of the conditional distribution and convenient estimation procedures. However, this is no longer the case for copula models with mixed discrete and continuous marginal distributions, because the corresponding density function cannot be decomposed so nicely. In this paper, we introduce a copula transformation method that allows us to represent the density function of a distribution with mixed discrete and continuous marginals as the product of the marginal probability mass/density functions and the copula density function. With the proposed method, conditional distributions can be described analytically and the computational complexity of the estimation procedure can be reduced, depending on the type of copula used.
- Published
- 2020
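The decomposition entry 18 refers to is the one available in the all-continuous case (obtained by differentiating Sklar's theorem; standard background, not the paper's contribution), and it is exactly what fails for mixed discrete-continuous marginals:

```latex
f(x_1, \dots, x_d)
= c\bigl(F_1(x_1), \dots, F_d(x_d)\bigr)\,
  \prod_{j=1}^{d} f_j(x_j).
```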
19. Investigating dependence between frequency and severity via simple generalized linear models
- Author
- Woojoo Lee, Jae Youn Ahn, and Sojung Carol Park
- Subjects
- Statistics and Probability, Generalized linear model, Rate making, Estimator, Regression analysis, Bayesian inference, Covariate, Econometrics, Statistical dispersion, Independence (probability theory), Mathematics
- Abstract
Recently, a body of literature proposed new models relaxing a widely-used but controversial assumption of independence between claim frequency and severity in non-life insurance rate making. This paper critically reviews a generalized linear model approach, where a dependence between claim frequency and severity is introduced by treating frequency as a covariate in a regression model for severity. As an extension of this approach, we propose a dispersion model for severity. For this model, the information loss caused by using average severity rather than individual severity is examined in detail and the parameter estimators suffering from low efficiency are identified. We also provide analytical solutions for the aggregate sum to help rate making. We show that the simple functional form used in current research may not properly reflect the real underlying dependence structure. A real data analysis is given to explain our analytical findings.
- Published
- 2019
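The modeling device reviewed in entry 19 is straightforward to emulate: treat the claim count as a covariate in a gamma regression for average severity. A minimal sketch with simulated data follows, assuming statsmodels is available; the data-generating values and coefficients are illustrative only.

```python
# Sketch of the approach reviewed in entry 19: dependence is introduced by
# using the claim count N as a covariate in a gamma GLM for average severity.
# Simulated data; all parameter values are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n_pol = 2000
N = 1 + rng.poisson(0.3, size=n_pol)             # counts for policies with claims
mu = np.exp(1.0 + 0.2 * N)                       # severity mean increases with N
avg_sev = rng.gamma(shape=2.0, scale=mu / 2.0)   # average severity with mean mu

X = sm.add_constant(N.astype(float))
gamma_glm = sm.GLM(avg_sev, X,
                   family=sm.families.Gamma(link=sm.families.links.Log()))
print(gamma_glm.fit().params)   # slope near 0.2: positive dependence recovered
```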
20. Does hunger for bonuses drive the dependence between claim frequency and severity?
- Author
- Joseph H.T. Kim, Sojung Carol Park, and Jae Youn Ahn
- Subjects
- Statistics and Probability, Economics and Econometrics, Random effects model, Dependency structure, Insurance claims, Econometrics, Economics, Statistics, Probability and Uncertainty, Independence (probability theory)
- Abstract
Auto ratemaking models have traditionally assumed independence between claim frequency and severity. With the development of insurance claim models that can accommodate dependence between claim frequency and severity, a series of recent studies has revealed that the aforementioned dependence between frequency and severity exists for auto insurance claims, demonstrating the validity of such models. However, the underlying process that creates this dependence has received little attention in the literature. Thus, we show that a rational decision-making process of drivers known as bonus hunger can systemically induce dependence between the claim frequency and severity even when the ground-up loss frequency and severity are, in fact, independent. Our model, based on the random effect model coupled with the standard bonus–malus system, successfully explains the seemingly contradictory results from the existing literature of weak positive dependence, between the claim frequency and severity for liability claims, and moderately negative dependence for collision claims. Our findings show that the seemingly contradicting dependence structures reported in the literature may be neither accidental nor sample specific. Furthermore, the bonus-hunger process also implies that the level of the claim frequency-severity dependence varies across bonus–malus classes, suggesting that a uniform dependency structure may not be appropriate for auto ratemaking modeling.
- Published
- 2018
21. A case study for intercontinental comparison of herd behavior in global stock markets
- Author
- Woojoo Lee, Jae Youn Ahn, Yang Ho Choi, and Changki Kim
- Subjects
- Statistics and Probability, Financial economics, Applied Mathematics, Comonotonicity, Financial market, Modeling and Simulation, Stock market, Herd behavior, Finance, Statistics, Probability and Uncertainty
- Abstract
Measuring market fear is an important way of understanding fundamental economic phenomena related to financial crises. There have been several approaches to measuring market fear or panic level in a financial market. Recently, herd behavior has gained popularity as an important economic phenomenon for explaining fear in financial markets. In this paper, we investigate herd behavior in global stock markets with a focus on intercontinental comparison. While various risk measures are available for the detection of herd behavior in the market, we use the standardized herd behavior index of Dhaene et al. (Insurance: Mathematics and Economics, 50, 357-370, 2012b) and Lee and Ahn (Dependence Modeling, 5, 316-329, 2017) for the comparison of herd behaviors in global stock markets. Global stock market data from Morgan Stanley Capital International are used to study herd behavior, especially during periods of financial crises.
- Published
- 2018
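A sample analogue of the herd behavior index used in entry 21 can be computed directly, in the spirit of Dhaene et al. (2012): compare the variance of the observed market sum with the variance of its comonotonic counterpart. The sketch below uses simulated returns, not the MSCI data from the paper.

```python
# Sample analogue of a herd behavior index: variance of the observed sum of
# returns divided by the variance of the comonotonic sum (margins sorted so
# that all components move together). Values near 1 suggest herd behavior.
import numpy as np

rng = np.random.default_rng(1)
cov = [[1.0, 0.5, 0.5], [0.5, 1.0, 0.5], [0.5, 0.5, 1.0]]
returns = rng.multivariate_normal(mean=np.zeros(3), cov=cov, size=1000)

observed_var = returns.sum(axis=1).var()
comono_var = np.sort(returns, axis=0).sum(axis=1).var()   # comonotonic coupling
print(f"herd index (sample analogue): {observed_var / comono_var:.3f}")
```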
22. Designing a Bonus-Malus system reflecting the claim size under the dependent frequency-severity model
- Author
- Joseph H.T. Kim, Jae Youn Ahn, and Rosy Oh
- Subjects
- Statistics and Probability, Generalized linear model, Rate making, Bivariate analysis, Management Science and Operations Research, Random effects model, Bonus-malus, Econometrics, A priori and a posteriori, Statistics, Probability and Uncertainty
- Abstract
In the auto insurance industry, a Bonus-Malus System (BMS) is commonly used as an a posteriori risk classification mechanism to set the premium for the next contract period based on a policyholder's claim history. Even though the recent literature reports evidence of a significant dependence between frequency and severity, the current BMS practice is to use a frequency-based transition rule while ignoring severity information. Although Oh et al. [(2020). Bonus-Malus premiums under the dependent frequency-severity modeling. Scandinavian Actuarial Journal 2020(3): 172–195] claimed that the frequency-driven BMS transition rule can accommodate the dependence between frequency and severity, their proposal is only a partial solution, as the transition rule still completely ignores the claim severity and is unable to penalize large claims. In this study, we propose to use a BMS with a transition rule based on both the frequency and the size of claims, built on the bivariate random effect model, which conveniently allows dependence between frequency and severity. We analytically derive the optimal relativities under the proposed BMS framework and show that the proposed BMS outperforms the existing frequency-driven BMS. Numerical experiments are also provided, using both hypothetical and actual datasets, to assess the effect of various dependencies on the BMS risk classification and confirm our theoretical findings.
- Published
- 2020
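To make the transition-rule discussion in entry 22 concrete, here is a toy frequency-driven BMS of the kind the paper argues against; the "-1/+2" rule and all parameters are illustrative inventions, not the authors' frequency-severity rule.

```python
# Toy frequency-driven BMS: a claim-free year moves the insured down one
# level; each claim moves them up two. This rule ignores claim size, which
# is precisely the limitation entry 22 addresses. Parameters are illustrative.
import numpy as np

N_LEVELS = 10

def next_level(level: int, n_claims: int) -> int:
    if n_claims == 0:
        return max(level - 1, 0)
    return min(level + 2 * n_claims, N_LEVELS - 1)

rng = np.random.default_rng(6)
level = 5
for year in range(1, 11):
    claims = int(rng.poisson(0.15))            # annual claim count
    level = next_level(level, claims)
    print(f"year {year}: {claims} claim(s) -> BM level {level}")
```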
23. Bayesian analysis of multivariate crash counts using copulas
- Author
- Jae Youn Ahn, Rosy Oh, Eun Sug Park, and Man Suk Oh
- Subjects
- Multivariate statistics, Overdispersion, Joint probability distribution, Poisson regression, Safety, Risk, Reliability and Quality, Models, Statistical, Accidents, Traffic, Regression analysis, Bayes Theorem, Random effects model, Multivariate Analysis, Marginal distribution, Count data
- Abstract
There has been growing interest in jointly modeling correlated multivariate crash counts in road safety research over the past decade. To assess the effects of roadway characteristics or environmental factors on crash counts by severity level or by collision type, various models including multivariate Poisson regression models, multivariate negative binomial regression models, and multivariate Poisson-Lognormal regression models have been suggested. We introduce more general copula-based multivariate count regression models with correlated random effects within a Bayesian framework. Our models incorporate the dependence among the multivariate crash counts by modeling multivariate random effects using copulas. Copulas provide a flexible way to construct valid multivariate distributions by decomposing any joint distribution into a copula and the marginal distributions. Overdispersion, as well as general correlation structures including both positive and negative correlations, in multivariate crash counts can easily be accounted for by this approach. Our copula-based models can also encompass previously suggested multivariate count regression models, including multivariate Poisson-Gamma mixture models and multivariate Poisson-Lognormal regression models. The proposed method is illustrated with crash count data of five different severity levels collected from 451 three-leg unsignalized intersections in California.
- Published
- 2019
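The construction in entry 23 (dependent counts via copula-coupled random effects) can be sketched in a few lines: draw correlated normals, push them through the Gaussian copula to uniforms, map those to gamma random effects, and feed Poisson margins. All parameter values below are illustrative, not estimates from the crash data.

```python
# Sketch of copula-coupled random effects driving correlated Poisson counts.
# Gaussian copula with negative correlation; parameters are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
corr = np.array([[1.0, -0.4], [-0.4, 1.0]])     # negative dependence is allowed

z = rng.multivariate_normal(np.zeros(2), corr, size=5000)
u = stats.norm.cdf(z)                           # Gaussian copula -> uniforms
theta = stats.gamma.ppf(u, a=2.0, scale=0.5)    # correlated random effects, mean 1
counts = rng.poisson(lam=3.0 * theta)           # Poisson counts given the effects

print(np.corrcoef(counts.T)[0, 1])              # counts inherit the correlation
print(counts.var(axis=0) / counts.mean(axis=0)) # overdispersion: ratios above 1
```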
24. Extreme value theory in mixture distributions and a statistical method to control the possible bias
- Author
- Yang Ho Choi, Wonseon Gwak, Jae Youn Ahn, and Hyein Goo
- Subjects
- Statistics and Probability, Estimator, Gumbel distribution, Rate of convergence, Pickands–Balkema–de Haan theorem, Generalized extreme value distribution, Mixture distribution, Applied mathematics, Extreme value theory, Quantile
- Abstract
In this paper, extreme behaviors of a mixture distribution are analyzed. We investigate some cases where the mixture distributions are in the proper domain of attraction, so that the extreme value of the mixture distribution converges to the proper generalized extreme value (GEV) distribution. However, in general, there is no guarantee that the distribution of the data is in the proper maximum domain of attraction. Furthermore, since the convergence rate can be slow even with guaranteed asymptotic convergence, the GEV estimation method might produce biased estimates, as shown in Choi et al. (2014). The paper provides a safe method to control the quality of the quantile estimator for extreme values.
- Published
- 2016
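The bias phenomenon described in entry 24 arises in the standard block-maxima workflow, sketched below with scipy; the mixture, block size, and sample sizes are illustrative, not the paper's setup.

```python
# Block-maxima GEV fit on mixture data. The paper's point is that when
# convergence to the GEV is slow, quantile estimates from such fits can be
# biased. Data, block size, and mixture weights are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 100_000
heavy = stats.pareto.rvs(b=3.0, size=n, random_state=rng)
mix = np.where(rng.random(n) < 0.9, rng.normal(size=n), heavy)  # 90/10 mixture

block_maxima = mix.reshape(1000, 100).max(axis=1)   # 1000 blocks of size 100
c, loc, scale = stats.genextreme.fit(block_maxima)  # scipy's c equals -xi
print(f"fitted GEV shape c: {c:.3f}")
print(f"0.99 block-maximum quantile: {stats.genextreme.ppf(0.99, c, loc, scale):.2f}")
```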
25. On multivariate countermonotonic copulas and their actuarial application
- Author
- Jae Youn Ahn and Bangwon Ko
- Subjects
- Multivariate statistics, Optimization problem, General Mathematics, Comonotonicity, Copula, Upper and lower bounds, Econometrics, Risk pool, Mathematics
- Abstract
Extreme positive dependence, also known as comonotonicity or the Fréchet upper bound, has been an important concept in insurance since it describes the most dangerous financial behaviors. However, there is no generally agreed upon definition of extreme negative dependence, because the corresponding Fréchet lower bound does not exist. To resolve this, a set of copulas named d-countermonotonic (d-CTM) copulas, rather than a single copula, has been proposed to define extreme negative dependence. The set of d-CTM copulas can be quite useful in various optimization problems. In this paper, we investigate various properties of d-CTM with more emphasis on the practical issues of actual optimization problems. As an application to insurance, we explore the effect of risk pooling under extreme dependence by adopting the so-called measure of uncertainty.
- Published
- 2016
26. Non-parametric inference of risk measures
- Author
- Jae Youn Ahn
- Subjects
- Statistics, Econometrics, Non-parametric inference, Importance sampling, Mathematics, Central limit theorem
- Published
- 2018
27. On Minimal Copulas under the Concordance Order
- Author
- Sebastian Fuchs and Jae Youn Ahn
- Subjects
- Control and Optimization, Applied Mathematics, Kendall tau rank correlation coefficient, Concordance, Copula, Management Science and Operations Research, Minimization, Mathematics
- Abstract
In the present paper, we study extreme negative dependence, focusing on the concordance order for copulas. With the absence of a least element for dimensions $d \ge 3$, the set of all minimal elements in the collection of all copulas turns out to be a natural and quite important extreme negative dependence concept. We investigate several sufficient conditions, and we provide a necessary condition for a copula to be minimal. The sufficient conditions are related to the extreme negative dependence concept of d-countermonotonicity, and the necessary condition is related to the collection of all copulas minimizing multivariate Kendall's tau. The concept of minimal copulas has already proved to be useful in various continuous and concordance order preserving optimization problems, including variance minimization and the detection of lower bounds for certain measures of concordance. We substantiate this key role of minimal copulas by showing that every continuous and concordance order preserving functional on copulas is minimized by some minimal copula and, in case the continuous functional is even strictly concordance order preserving, it is minimized by minimal copulas only. Applying the above results, we may conclude that every minimizer of Spearman's rho is also a minimizer of Kendall's tau.
- Published
- 2018
28. Financial interpretation of herd behavior index and its statistical estimation
- Author
- Jae Youn Ahn and Woojoo Lee
- Subjects
- Statistics and Probability, Finance, Comonotonicity, Bayesian inference, Confidence interval, Econometrics, Cluster analysis, Herd behavior, Mathematics
- Abstract
Herd behavior has received increasing attention as a key to understanding financial crises. Recently, Dhaene et al. (2012) proposed the herd behavior index (HIX) to measure the degree of comonotonic movement of stock prices. Choi et al. (2013) introduced a revised version of the HIX (RHIX) and illustrated why RHIX should be preferred to HIX in comparing herd behaviors based on simple toy models, but did not offer a sufficient justification. The present paper investigates three aspects of RHIX. First, RHIX is explained as a useful tool to compare herd behaviors from different groups; for this, a new alternative representation of RHIX is provided. Second, we investigate the statistical estimation of RHIX. In particular, the realized version of RHIX is calculated using tick-by-tick stock prices, and asymptotics of bootstrap equivalence are provided to estimate the confidence interval of RHIX. Third, to extend the application scope of RHIX, it is employed in a clustering analysis.
- Published
- 2015
29. Enhanced elevated temperature performance of LiFePO4 modified spinel LiNi0.5Mn1.5O4 cathode
- Author
- Min Chul Kim, Won Hee Jang, Vanchiappan Aravindan, Yun-Sung Lee, Sol Nip Lee, and Jae Youn Ahn
- Subjects
- Olivine, Materials science, Mechanical Engineering, Spinel, Metals and Alloys, Mineralogy, Cathode, Chemical engineering, Mechanics of Materials, Covalent bond, Phase (matter), Materials Chemistry, Surface modification, Thermal stability, Sol-gel
- Abstract
A dramatic improvement in the elevated temperature performance of the spinel phase LiNi0.5Mn1.5O4 cathode is achieved by surface modification with LiFePO4. The presence of the strong covalent P–O bond in the olivine phase imparts the necessary thermal stability to the LiNi0.5Mn1.5O4 cathode, which retains ~89% of its initial reversible capacity after 50 cycles at 55 °C. Though the LiFePO4-modified phase exhibits a marginal reduction in initial reversibility compared with bare LiNi0.5Mn1.5O4, it displays exceptional capacity retention irrespective of ambient or elevated temperature.
- Published
- 2014
30. Asymptotic theory for the empirical Haezendonck–Goovaerts risk measure
- Author
- Nariankadu D. Shyamalkumar and Jae Youn Ahn
- Subjects
- Statistics and Probability, Economics and Econometrics, Risk measure, Deviation risk measure, Dynamic risk measure, Spectral risk measure, Coherent risk measure, Distortion risk measure, Expected shortfall, Entropic value at risk, Econometrics, Statistics, Probability and Uncertainty, Mathematics
- Abstract
Haezendonck–Goovaerts risk measures are a recently introduced class of risk measures which includes, as its minimal member, the Tail Value-at-Risk (T-VaR), arguably the most popular risk measure in global insurance regulation. In applications one often has to estimate the risk measure given a random sample from an unknown distribution. The distribution could either be truly unknown or could be the distribution of a complex function of economic and idiosyncratic variables, with the complexity of the function rendering its distribution indeterminable. Hence statistical procedures for the estimation of Haezendonck–Goovaerts risk measures are a key requirement for their use in practice. A natural estimator of the Haezendonck–Goovaerts risk measure is the Haezendonck–Goovaerts risk measure of the empirical distribution, but its statistical properties have not yet been explored in detail. The main goal of this article is both to establish the strong consistency of this estimator and to derive weak convergence limits for it. We also conduct a simulation study to lend insight into the sample sizes required for these asymptotic limits to take hold.
- Published
- 2014
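For orientation, one common formulation of the Haezendonck–Goovaerts risk measure at level alpha, for a normalized Young function phi (notation may differ slightly from the paper): H_alpha[X, x] is the unique solution h of the first equation below, and the risk measure is the infimum; taking phi(t) = t recovers T-VaR, the minimal member mentioned in the abstract.

```latex
\mathbb{E}\!\left[\varphi\!\left(\frac{(X - x)_+}{h}\right)\right] = 1 - \alpha,
\qquad
\pi_\alpha[X] = \inf_{x \in \mathbb{R}} \bigl\{\, x + H_\alpha[X, x] \,\bigr\}.
```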
31. On the Application of Multivariate Kendall's Tau and Its Interpretation
- Author
- Jae Youn Ahn and Woojoo Lee
- Subjects
- Combinatorics, Multivariate statistics, Statistics, Bivariate analysis, Upper and lower bounds, Mathematics
- Abstract
We study the multivariate extension of Kendall's tau and its statistical interpretation. There exist various versions of multivariate Kendall's tau, for example Scarsini (1984), Joe (1990), and Genest et al. (2011); however, few of them mention its lower bounds. For the bivariate case, the Fréchet–Hoeffding lower bound achieves the lower bound of Kendall's tau. However, in the multivariate case, the Fréchet–Hoeffding lower bound itself does not exist as a distribution, which makes the interpretation of Kendall's tau unclear when it takes a negative value. In this paper, we explain sufficient conditions for achieving the lower bound of Kendall's tau and provide real data examples that give further insight into the interpretation of the lower bounds of Kendall's tau.
- Published
- 2013
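The bivariate fact entry 31 builds on can be checked in a few lines: perfectly opposite rankings (the Fréchet–Hoeffding lower bound, which exists as a distribution only for d = 2) attain Kendall's tau of -1. A quick sketch, assuming scipy:

```python
# A countermonotonic pair attains Kendall's tau = -1 in the bivariate case;
# for d >= 3 no joint distribution plays the analogous role.
import numpy as np
from scipy import stats

x = np.random.default_rng(7).normal(size=500)
tau, _ = stats.kendalltau(x, -x)    # every pair of observations is discordant
print(tau)                          # -1.0
```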
32. Generalized Linear Mixed Models for Dependent Compound Risk Models
- Author
- Jae Youn Ahn, Emiliano A. Valdez, Himchan Jeong, and Sojung Park
- Subjects
- Generalized linear model, Tweedie distribution, Insurance policy, Econometrics, Random effects model, Statistical evidence, Generalized linear mixed model, Independence (probability theory), Mathematics
- Abstract
In ratemaking, calculation of a pure premium has traditionally been based on modeling frequency and severity in an aggregated claims model. For simplicity, it has been a standard practice to assume the independence of loss frequency and loss severity. In recent years, there has been sporadic interest in the actuarial literature in exploring models that depart from this independence. In this article, we extend the work of Garrido et al. (2016), which uses generalized linear models (GLMs) that account for dependence between frequency and severity and simultaneously incorporate rating factors to capture policyholder heterogeneity. In addition, we quantify and explain the contribution of the variability of claims among policyholders through the use of random effects using generalized linear mixed models (GLMMs). We calibrated our model using a portfolio of auto insurance contracts from a Singapore insurer, for which we observed claim counts and amounts from policyholders over a period of six years. We compared our results with the dependent GLM considered by Garrido et al. (2016), Tweedie models, and the case of independence. The dependent GLMM shows statistical evidence of positive dependence between frequency and severity. Using validation procedures, we find that the results demonstrate a superior model when random effects are considered within a GLMM framework.
- Published
- 2017
33. Large Sample Behavior of the CTE and VaR Estimators under Importance Sampling
- Author
- Jae Youn Ahn and Nariankadu D. Shyamalkumar
- Subjects
- Statistics and Probability, Economics and Econometrics, Risk measure, Asymptotic distribution, Conditional expectation, Tail value at risk, Value at risk, Variance reduction, Importance sampling, Quantile, Econometrics, Statistics, Probability and Uncertainty
- Abstract
The α-level value at risk (VaR) and the α-level conditional tail expectation (CTE) of a continuous random variable X are defined as its α-level quantile (denoted by qα) and its conditional expectation given the event {X > qα}, respectively. VaR is a popular risk measure in the banking sector, for both external and internal reporting purposes, while the CTE has recently become the risk measure of choice for insurance regulation in North America. Estimation of the CTE for company assets and liabilities is becoming an important actuarial exercise, and the size and complexity of these liabilities make inference procedures with good small sample performance very desirable. A common situation is one in which the CTE of the portfolio loss is estimated using simulated values, and in such situations the use of variance reduction techniques such as importance sampling has proved to be fruitful. Construction of confidence intervals for the CTE relies on the availability of the asymptotic distribution of the ...
- Published
- 2011
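The two estimands defined in entry 33's abstract have direct empirical versions; a minimal sketch follows, using plain Monte Carlo with an illustrative loss distribution (the paper's focus is the importance-sampling case).

```python
# Empirical alpha-level VaR (sample quantile) and CTE (mean loss beyond VaR).
import numpy as np

rng = np.random.default_rng(4)
losses = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # illustrative losses
alpha = 0.95

var_hat = np.quantile(losses, alpha)        # empirical VaR: alpha-level quantile
cte_hat = losses[losses > var_hat].mean()   # empirical CTE: mean above the VaR
print(f"VaR: {var_hat:.3f}, CTE: {cte_hat:.3f}")
```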
34. An Asymptotic Analysis of the Bootstrap Bias Correction for the Empirical CTE
- Author
- Jae Youn Ahn and Nariankadu D. Shyamalkumar
- Subjects
- Statistics and Probability, Tail value at risk, Economics and Econometrics, Asymptotic analysis, Bias of an estimator, Sample size determination, Order statistic, Point estimation, Conditional expectation, Quantile, Statistics, Probability and Uncertainty, Mathematics
- Abstract
The α-level Conditional Tail Expectation (CTE) of a continuous random variable X is defined as its conditional expectation given the event {X > qα}, where qα represents its α-level quantile. It is well known that the empirical CTE (the average of the n(1 − α) largest order statistics in a sample of size n) is a negatively biased estimator of the CTE. This bias vanishes as the sample size increases but can be significant in small samples, hence the need for bias correction. Although the bootstrap method has been suggested for correcting the bias of the empirical CTE, recent research shows that alternate kernel-based methods of bias correction perform better in some practical examples. To further understand this phenomenon, we conduct an asymptotic analysis of the exact bootstrap bias correction for the empirical CTE, focusing on its performance as a point estimator of the bias of the empirical CTE. We provide heuristics suggesting that the exact bootstrap bias correction is approximately a kernel-b...
- Published
- 2010
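The bootstrap bias correction analyzed in entry 34 has a simple Monte Carlo approximation (the paper studies the exact version): estimate the bias as the average of the resampled CTEs minus the empirical CTE, then subtract it. Sample size, level, and distribution below are illustrative.

```python
# Monte Carlo approximation of the bootstrap bias correction for the
# empirical CTE. The empirical CTE averages the n(1 - alpha) largest order
# statistics and is negatively biased in small samples.
import numpy as np

rng = np.random.default_rng(5)
alpha, n, B = 0.95, 200, 2000
x = rng.exponential(size=n)                          # small illustrative sample

def empirical_cte(sample: np.ndarray, alpha: float) -> float:
    k = int(np.ceil(len(sample) * (1 - alpha)))      # n(1 - alpha) largest values
    return float(np.sort(sample)[-k:].mean())

cte_hat = empirical_cte(x, alpha)
boot = np.array([empirical_cte(rng.choice(x, size=n, replace=True), alpha)
                 for _ in range(B)])
bias_hat = boot.mean() - cte_hat                     # bootstrap bias estimate
print(f"empirical CTE: {cte_hat:.3f}, corrected: {cte_hat - bias_hat:.3f}")
```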
35. Analyzing Herd Behavior in Global Stock Markets: An Intercontinental Comparison
- Author
- Changki Kim, Yangho Choi, Woojoo Lee, and Jae Youn Ahn
- Subjects
- Quantitative Finance - Statistical Finance (q-fin.ST), Quantitative Finance - Risk Management (q-fin.RM)
- Abstract
Herd behavior is an important economic phenomenon, especially in the context of the recent financial crises. In this paper, herd behavior in global stock markets is investigated with a focus on intercontinental comparison. Since most existing herd behavior indices do not provide a comparative method, we propose a new herd behavior index and demonstrate its desirable properties through simple theoretical models. For the empirical analysis, we use global stock market data from Morgan Stanley Capital International to study herd behavior in detail, especially during periods of financial crises.
- Published
- 2013