34 results on "Eric V. Slud"
Search Results
2. Multi-outcome longitudinal small area estimation – a case study
- Author
- Eric V. Slud and Yves Thibaudeau
- Subjects
Statistics and Probability ,Mixed model ,Estimation ,Small area estimation ,Computational Theory and Mathematics ,Current Population Survey ,Applied Mathematics ,Statistics ,Conditional probability ,Statistics, Probability and Uncertainty ,Analysis ,Outcome (probability) ,Mathematics - Abstract
A recent paper [Thibaudeau, Slud, and Gottschalck (2017). Modeling log-linear conditional probabilities for estimation in surveys. The Annals of Applied Statistics, 11, 680–697] proposed a ‘hybrid’...
- Published
- 2019
- Full Text
- View/download PDF
3. Combining estimators of a common parameter across samples
- Author
- Abram Kagan, Eric V. Slud, and Ilia Vonta
- Subjects
Statistics and Probability ,Applied Mathematics ,05 social sciences ,Estimator ,Estimating equations ,01 natural sciences ,010104 statistics & probability ,symbols.namesake ,Multiple data ,Efficient estimator ,Computational Theory and Mathematics ,0502 economics and business ,Statistics ,symbols ,0101 mathematics ,Statistics, Probability and Uncertainty ,Fisher information ,Analysis ,050205 econometrics ,Mathematics - Abstract
In many settings, multiple data collections and analyses on the same topic are summarised separately through statistical estimators of parameters and variances, and yet there are scientific...
- Published
- 2018
- Full Text
- View/download PDF
4. Adaptive Intervention Methodology for Reduction of Respondent Contact Burden in the American Community Survey
- Author
- Eric V. Slud, Todd Hughes, and Robert D. Ashmead
- Subjects
medicine.medical_specialty ,contact history instrument ,Statistics ,nonresponse follow-up ,respondent burden ,Sample (statistics) ,paradata ,01 natural sciences ,Paradata ,HA1-4737 ,American Community Survey ,010104 statistics & probability ,03 medical and health sciences ,0302 clinical medicine ,Telephone interview ,Intervention (counseling) ,Family medicine ,Respondent ,medicine ,Research studies ,030212 general & internal medicine ,0101 mathematics ,Marketing ,Personal interview - Abstract
The notion of respondent contact burden in sample surveys is defined, and a multi-stage process to develop policies for curtailing nonresponse follow-up is described with the goal of reducing this burden on prospective survey respondents. The method depends on contact history paradata containing information about contact attempts both for respondents and for sampled nonrespondents. By analysis of past data, policies to stop case follow-up based on control variables measured in paradata can be developed by calculating propensities to respond for paradata-defined subgroups of sampled cases. Competing policies can be assessed by comparing outcomes (lost interviews, numbers of contacts, patterns of reluctant participation, or refusal to participate) as if these stopping policies had been followed in past data. Finally, embedded survey experiments may be used to assess contact-burden reduction policies when these are implemented in the field. The multi-stage method described here abstracts the stages followed in a series of research studies aimed at reducing contact burden in the Computer Assisted Telephone Interview (CATI) and Computer Assisted Personal Interview (CAPI) modes of the American Community Survey (ACS), which culminated in implementation of policy changes in the ACS.
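As a rough sketch of the propensity idea described above (not the authors' actual procedure or the ACS paradata layout; the column names, toy data, and threshold below are illustrative assumptions), one can tabulate, within paradata-defined subgroups, the fraction of cases still being worked at each contact attempt that eventually respond, and flag where a stopping policy would curtail follow-up:

    import pandas as pd

    # Toy contact-history paradata: one row per contact attempt.
    # Column names are illustrative, not the actual Contact History Instrument fields.
    paradata = pd.DataFrame({
        "case_id":   [1, 1, 1, 2, 2, 3, 3, 3, 3, 4, 5, 5],
        "attempt":   [1, 2, 3, 1, 2, 1, 2, 3, 4, 1, 1, 2],
        "subgroup":  ["A", "A", "A", "B", "B", "A", "A", "A", "A", "B", "B", "B"],
        "responded": [0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1],  # 1 on the attempt that obtained the interview
    })

    def propensity_by_attempt(df):
        """Among cases still being worked at attempt k in each subgroup,
        estimate the propensity to respond at or after that attempt."""
        ever_responded = df.groupby("case_id")["responded"].max()
        rows = []
        for (grp, k), sub in df.groupby(["subgroup", "attempt"]):
            open_cases = sub["case_id"].unique()
            rows.append({"subgroup": grp, "attempt": k, "cases": len(open_cases),
                         "propensity": ever_responded.loc[open_cases].mean()})
        return pd.DataFrame(rows)

    tab = propensity_by_attempt(paradata)
    # Candidate stopping policy: curtail follow-up where the estimated propensity
    # drops below a threshold (the 0.25 cutoff is purely illustrative).
    tab["stop_followup"] = tab["propensity"] < 0.25
    print(tab)

Competing thresholds can then be compared retrospectively, as in the abstract, by counting the interviews that would have been lost and the contact attempts saved under each policy.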
- Published
- 2017
- Full Text
- View/download PDF
5. Checking distributional assumptions for pharmacokinetic summary statistics based on simulations with compartmental models
- Author
- Estelle Russek-Cohen, Eric V. Slud, and Meiyu Shen
- Subjects
Pharmacology ,Statistics and Probability ,media_common.quotation_subject ,Crossover ,Univariate ,Cmax ,Bioequivalence ,Crossover study ,Therapeutic Equivalency ,Pharmacokinetics ,Area Under Curve ,Statistics ,Range (statistics) ,Drugs, Generic ,Humans ,Computer Simulation ,Pharmacology (medical) ,Normality ,Statistical Distributions ,Mathematics ,media_common - Abstract
Bioequivalence (BE) studies are an essential part of the evaluation of generic drugs. The most common in vivo BE study design is the two-period two-treatment crossover design. AUC (area under the concentration-time curve) and Cmax (maximum concentration) are obtained from the observed concentration-time profiles for each subject from each treatment under each sequence. In the BE evaluation of pharmacokinetic crossover studies, the normality of the univariate response variable, e.g. log(AUC) or log(Cmax), is often assumed in the literature without much evidence. Therefore, we investigate the distributional assumption of the normality of response variables, log(AUC) and log(Cmax), by simulating concentration-time profiles from two-stage pharmacokinetic models (commonly used in pharmacokinetic research) for a wide range of pharmacokinetic parameters and measurement error structures. Our simulations show that, under reasonable distributional assumptions on the pharmacokinetic parameters, log(AUC) has heavy tails and log(Cmax) is skewed. Sensitivity analyses are conducted to investigate how the distribution of the standardized log(AUC) (or the standardized log(Cmax)) for a large number of simulated subjects deviates from normality if distributions of errors in the pharmacokinetic model for plasma concentrations deviate from normality and if the plasma concentration can be described by different compartmental models.
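As a rough illustration of the kind of simulation described (the compartmental model, parameter values, and error structure below are assumptions for the sketch, not the paper's two-stage models), one can draw subject-level parameters for a one-compartment oral-absorption model, add multiplicative measurement error, and inspect the skewness and tails of log(AUC) and log(Cmax):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_subj, dose = 2000, 100.0
    t = np.linspace(0.1, 48, 400)                      # sampling times (h)

    # Illustrative lognormal between-subject PK parameters (not the paper's values)
    ka = rng.lognormal(np.log(1.0), 0.3, n_subj)       # absorption rate (1/h)
    ke = rng.lognormal(np.log(0.1), 0.3, n_subj)       # elimination rate (1/h)
    V  = rng.lognormal(np.log(30.0), 0.3, n_subj)      # volume of distribution (L)

    log_auc, log_cmax = [], []
    for i in range(n_subj):
        # one-compartment model with first-order absorption
        conc = dose * ka[i] / (V[i] * (ka[i] - ke[i])) * (np.exp(-ke[i] * t) - np.exp(-ka[i] * t))
        conc = conc * np.exp(rng.normal(0.0, 0.1, t.size))        # multiplicative measurement error
        auc = np.sum((conc[1:] + conc[:-1]) / 2 * np.diff(t))     # trapezoidal AUC
        log_auc.append(np.log(auc))
        log_cmax.append(np.log(conc.max()))

    for name, x in [("log(AUC)", np.array(log_auc)), ("log(Cmax)", np.array(log_cmax))]:
        print("%-9s skewness %6.3f   excess kurtosis %6.3f   Shapiro p %.3g"
              % (name, stats.skew(x), stats.kurtosis(x), stats.shapiro(x[:500]).pvalue))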
- Published
- 2016
- Full Text
- View/download PDF
6. Real-time dengue forecast for outbreak alerts in Southern Taiwan
- Author
- Jing-Dong Chou, Hsiao-Yu Wu, Hsiao-Hui Tsou, Chao A. Hsiung, Chiu-Wen Chang, Hui-Pin Ho, Ya-Ting Hsu, Yu-Chieh Cheng, Jui-Hun Chang, Fang-Jing Lee, Eric V. Slud, Te-Pin Chang, Ching-Len Liao, Chun-Hong Chen, Tzai-Hung Wen, Wen-Feng Hung, and Pei-Sheng Lin
- Subjects
0301 basic medicine ,Viral Diseases ,Atmospheric Science ,Rain ,RC955-962 ,Dengue Fever ,Disease Outbreaks ,Dengue fever ,Geographical Locations ,Dengue ,Mathematical and Statistical Techniques ,0302 clinical medicine ,Arctic medicine. Tropical medicine ,Epidemiology ,Medicine and Health Sciences ,Public and Occupational Health ,Statistics ,Temperature ,Infectious Diseases ,Geography ,Physical Sciences ,Public aspects of medicine ,RA1-1270 ,Research Article ,Neglected Tropical Diseases ,medicine.medical_specialty ,Asia ,Southern taiwan ,030231 tropical medicine ,Taiwan ,Research and Analysis Methods ,03 medical and health sciences ,Meteorology ,Environmental health ,medicine ,Humans ,Statistical Methods ,Weather ,Models, Statistical ,Public health ,Public Health, Environmental and Occupational Health ,Outbreak ,Humidity ,Tropical Diseases ,medicine.disease ,Dengue outbreak ,030104 developmental biology ,People and Places ,Earth Sciences ,Mathematics ,Forecasting - Abstract
Dengue fever is a viral disease transmitted by mosquitoes. In recent decades, dengue fever has spread throughout the world. In 2014 and 2015, southern Taiwan experienced its most serious dengue outbreak in recent years. Some statistical models have been established in the past; however, these models may not be suitable for predicting the large outbreaks of 2014 and 2015. The control of dengue fever has become the primary task of local health agencies. This study attempts to predict the occurrence of dengue fever in order to achieve the purpose of timely warning. We applied a newly developed autoregressive model (AR model) to assess the association between daily weather variability and daily dengue case numbers in 2014 and 2015 in Kaohsiung, the largest city in southern Taiwan. The model also included additional lagged weather predictors, and we developed 5-day-ahead and 15-day-ahead predictive models. Our results indicate that numbers of dengue cases in Kaohsiung are associated with humidity and the biting rate (BR). Our model is simple, intuitive and easy to use. The developed model can be embedded in a "real-time" schedule, and the data (at present) can be updated daily or weekly based on the needs of public health workers. In this study, a simple model using only meteorological factors performed well. The proposed real-time forecast model can help health agencies take public health actions to mitigate the influences of the epidemic. Author summary: Meteorological conditions are the most frequently mentioned factors in the study of dengue fever. Some of the main non-meteorological factors about which public-health authorities might have data, such as case counts or other current measurements of dengue outbreaks in neighboring cities, have been used in past dengue studies. In this study, we developed models for predicting dengue case numbers based on past dengue case data and meteorological data. The goal of the models is to provide early warning of the occurrence of dengue fever to assist public health agencies in preparing an epidemic response plan.
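The general flavor of an autoregressive, direct h-step-ahead forecast with lagged weather predictors can be sketched as below on simulated data; the lag length, horizon, predictor choice, and least-squares fitting here are illustrative assumptions, not the authors' model:

    import numpy as np

    def make_design(cases, humidity, lags=7, horizon=5):
        """Direct h-step-ahead design: predict log case counts at time t + horizon
        from lagged log case counts and lagged humidity (illustrative, not the paper's model)."""
        y = np.log1p(cases)
        rows, targets = [], []
        for t in range(lags - 1, len(cases) - horizon):
            rows.append(np.concatenate(([1.0],                       # intercept
                                        y[t - lags + 1:t + 1],       # lagged log counts
                                        humidity[t - lags + 1:t + 1])))
            targets.append(y[t + horizon])
        return np.array(rows), np.array(targets)

    rng = np.random.default_rng(1)
    n = 400
    humidity = 70 + 10 * np.sin(np.arange(n) / 30) + rng.normal(0, 2, n)
    cases = rng.poisson(np.exp(1.5 + 0.03 * (humidity - 70)))        # toy daily case counts

    X, y = make_design(cases, humidity, lags=7, horizon=5)           # 5-day-ahead model
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)                     # least-squares fit
    print("in-sample RMSE on log scale: %.3f" % np.sqrt(np.mean((X @ beta - y) ** 2)))

Refitting with horizon=15 gives the 15-day-ahead analogue; in a real-time schedule the design matrix is simply rebuilt as new daily data arrive.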
- Published
- 2020
- Full Text
- View/download PDF
7. Statistical analysis of co-occurrence patterns in microbial presence-absence datasets
- Author
- William F. Fagan, Eric V. Slud, Peter Thielen, Joshua T. Wolfe, Sharon Bewick, Florian P. Breitwieser, Shishir Paudel, Thomas Mehoke, Arjun Adhikari, David K. Karig, and Kumar P. Mainali
- Subjects
0106 biological sciences ,0301 basic medicine ,Jaccard index ,lcsh:Medicine ,Invasive Species ,Datasets as Topic ,01 natural sciences ,Database and Informatics Methods ,Mathematical and Statistical Techniques ,Statistics ,lcsh:Science ,Macroecology ,Statistical Data ,Multidisciplinary ,Ecology ,Mathematical Models ,Microbiota ,Genomics ,Genomic Databases ,Community Ecology ,Medical Microbiology ,Physical Sciences ,Sequence Analysis ,Statistics (Mathematics) ,Research Article ,Correlation coefficient ,Bioinformatics ,Rare species ,Sequence Databases ,Microbial Genomics ,Biology ,Research and Analysis Methods ,010603 evolutionary biology ,Microbiology ,Microbial Ecology ,03 medical and health sciences ,Similarity (network science) ,Species Colonization ,Genetics ,Spurious relationship ,Null model ,lcsh:R ,Ecology and Environmental Sciences ,Co-occurrence ,Biology and Life Sciences ,Computational Biology ,Genome Analysis ,030104 developmental biology ,Biological Databases ,lcsh:Q ,Microbiome ,Mathematics - Abstract
Drawing on a long history in macroecology, correlation analysis of microbiome datasets is becoming a common practice for identifying relationships or shared ecological niches among bacterial taxa. However, many of the statistical issues that plague such analyses in macroscale communities remain unresolved for microbial communities. Here, we discuss problems in the analysis of microbial species correlations based on presence-absence data. We focus on presence-absence data because this information is more readily obtainable from sequencing studies, especially for whole-genome sequencing, where abundance estimation is still in its infancy. First, we show how Pearson's correlation coefficient (r) and Jaccard's index (J), two of the most common metrics for correlation analysis of presence-absence data, can contradict each other when applied to a typical microbiome dataset. In our dataset, for example, 14% of species-pairs predicted to be significantly correlated by r were not predicted to be significantly correlated using J, while 37.4% of species-pairs predicted to be significantly correlated by J were not predicted to be significantly correlated using r. Mismatch was particularly common among species-pairs with at least one rare species (
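To make the r-versus-J contrast concrete, here is a tiny self-contained example on toy presence-absence vectors (not the paper's dataset); when one species is rare the two metrics can suggest quite different strengths of association:

    import numpy as np

    def pearson_phi(a, b):
        """Pearson correlation of two 0/1 presence-absence vectors (the phi coefficient)."""
        return np.corrcoef(a, b)[0, 1]

    def jaccard(a, b):
        """Jaccard index: shared presences over sites where at least one species occurs."""
        both = np.sum((a == 1) & (b == 1))
        either = np.sum((a == 1) | (b == 1))
        return both / either if either else np.nan

    # Toy vectors over 20 sites: a common species and a rare species that only occurs with it
    a = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
    b = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
    print("Pearson r (phi):", round(pearson_phi(a, b), 3))   # about 0.41
    print("Jaccard J      :", round(jaccard(a, b), 3))       # 0.25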
- Published
- 2017
8. Modeling log-linear conditional probabilities for estimation in surveys
- Author
- Yves Thibaudeau, Eric V. Slud, and Alfred Gottschalck
- Subjects
Log-linear model ,Statistics and Probability ,model calibration ,Chain rule (probability) ,05 social sciences ,Estimator ,Horvitz–Thompson estimator ,Conditional probability distribution ,01 natural sciences ,010104 statistics & probability ,Modeling and Simulation ,0502 economics and business ,Statistics ,Econometrics ,Survey data collection ,conditional probability ,0101 mathematics ,Statistics, Probability and Uncertainty ,Survey of Income and Program Participation ,Conditional variance ,050205 econometrics ,Mathematics - Abstract
The Survey of Income and Program Participation (SIPP) is a survey with a longitudinal structure and complex nonignorable design, for which correct estimation requires using the weights. The longitudinal setting also suggests conditional-independence relations between survey variables and early- versus late-wave employment classifications. We state original assumptions justifying an extension of the partially model-based approach of Pfeffermann, Skinner and Humphreys [J. Roy. Statist. Soc. Ser. A 161 (1998) 13–32], accounting for the design of SIPP and similar longitudinal surveys. Our assumptions support the use of log-linear models of longitudinal survey data. We highlight the potential they offer for simultaneous bias-control and reduction of sampling error relative to direct methods when applied to small subdomains and cells. Our assumptions allow us to innovate by showing how to rigorously use only a longitudinal survey to estimate a complex log-linear longitudinal association structure and embed it in cross-sectional totals to construct estimators that can be more efficient than direct estimators for small cells.
- Published
- 2017
- Full Text
- View/download PDF
9. Goodness of Fit Tests for Linear Mixed Models
- Author
- Ruth M. Pfeiffer, Min Tang, and Eric V. Slud
- Subjects
Statistics and Probability ,Numerical Analysis ,Regression analysis ,Random effects model ,Generalized linear mixed model ,Article ,symbols.namesake ,Goodness of fit ,Likelihood-ratio test ,Covariate ,Statistics ,Test statistic ,symbols ,Statistics, Probability and Uncertainty ,Fisher information ,Mathematics - Abstract
Linear mixed models (LMMs) are widely used for regression analysis of data that are assumed to be clustered or correlated. Assessing model fit is important for valid inference but to date no confirmatory tests are available to assess the adequacy of the fixed effects part of LMMs against general alternatives. We therefore propose a class of goodness-of-fit tests for the mean structure of LMMs. Our test statistic is a quadratic form of the difference between observed values and the values expected under the estimated model in cells defined by a partition of the covariate space. We show that this test statistic has an asymptotic chi-squared distribution when model parameters are estimated by maximum likelihood or by least squares and method of moments, and study its power under local alternatives both analytically and in simulations. Data on repeated measurements of thyroglobulin from individuals exposed to the accident at the Chernobyl power plant in 1986 are used to illustrate the proposed test.
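A schematic version of the cell-based quadratic-form idea is sketched below: fit a mean model, partition the covariate space into cells, and sum standardized squared cell residual totals, referring the result to a chi-squared distribution. This simplified sketch uses ordinary least squares and ignores the random-effects structure and the degrees-of-freedom correction for estimated parameters that the proposed test handles, so it illustrates the construction rather than reproducing the authors' statistic:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n, K = 600, 8
    x = rng.uniform(0, 10, n)
    y = 1.0 + 0.5 * x + rng.normal(0, 1, n)          # data generated from the fitted (linear) mean

    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])

    # Partition the covariate space into K cells and accumulate a quadratic form
    edges = np.quantile(x, np.linspace(0, 1, K + 1)[1:-1])
    cells = np.digitize(x, edges)
    T = 0.0
    for k in range(K):
        r_k = resid[cells == k]
        T += r_k.sum() ** 2 / (len(r_k) * sigma2)

    # Naive reference distribution (ignores parameter-estimation effects)
    print("statistic %.2f, approx chi-square(%d) p-value %.3f" % (T, K, stats.chi2.sf(T, K)))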
- Published
- 2017
10. Small-area estimation based on survey data from a left-censored Fay–Herriot model
- Author
- Tapabrata Maiti and Eric V. Slud
- Subjects
Statistics and Probability ,Estimation ,Small area estimation ,Mean squared error ,Applied Mathematics ,Statistics ,Econometrics ,Survey data collection ,Estimator ,Bias correction ,Context (language use) ,Statistics, Probability and Uncertainty ,Mathematics - Abstract
This paper develops methodology for survey estimation and small-area prediction using Fay–Herriot (1979) models in which the responses are left-censored. Parameter and small-area estimators are derived both by censored-data likelihoods and by an estimating-equation approach which adjusts a Fay–Herriot analysis restricted to the uncensored observations. Formulas for variances of estimators and mean-squared errors of small-area predictions are provided and supported by a simulation study. The methodology is applied to provide diagnostics for the left-censored Fay–Herriot model which are illustrated in the context of the Census Bureau's ongoing Small-Area Income and Poverty Estimation (SAIPE) project.
- Published
- 2011
- Full Text
- View/download PDF
11. Mean-Squared Error Estimation in Transformed Fay–Herriot Models
- Author
- Tapabrata Maiti and Eric V. Slud
- Subjects
Statistics and Probability ,Estimation ,Variable (computer science) ,Small area estimation ,Mean squared error ,Statistics ,Econometrics ,Estimator ,Survey sampling ,Point (geometry) ,Statistics, Probability and Uncertainty ,Reciprocal ,Mathematics - Abstract
The problem of accurately estimating the mean-squared error of small area estimators within a Fay–Herriot normal error model is studied theoretically in the common setting where the model is fitted to a logarithmically transformed response variable. For bias-corrected empirical best linear unbiased predictor small area point estimators, mean-squared error formulae and estimators are provided, with biases of smaller order than the reciprocal of the number of small areas. The performance of these mean-squared error estimators is illustrated by a simulation study and a real data example relating to the county level estimation of child poverty rates in the US Census Bureau's on-going 'Small area income and poverty estimation' project.
- Published
- 2006
- Full Text
- View/download PDF
12. Efficient semiparametric estimators via modified profile likelihood
- Author
- Filia Vonta and Eric V. Slud
- Subjects
Statistics and Probability ,Applied Mathematics ,Nonparametric statistics ,Estimator ,Semiparametric model ,Efficient estimator ,Statistics ,Linear regression ,Consistent estimator ,Applied mathematics ,Nuisance parameter ,Semiparametric regression ,Statistics, Probability and Uncertainty ,Mathematics - Abstract
A new strategy is developed for obtaining large-sample efficient estimators of finite-dimensional parameters β within semiparametric statistical models. The key idea is to maximize over β a nonparametric log-likelihood with the infinite-dimensional nuisance parameter λ replaced by a consistent preliminary estimator λ̃_β of the Kullback–Leibler minimizing value λ_β for fixed β. It is shown that the parametric submodel with the Kullback–Leibler minimizer substituted for λ is generally a least-favorable model. Results extending those of Severini and Wong (Ann. Statist. 20 (1992) 1768) then establish efficiency of the estimator of β maximizing the log-likelihood with λ replaced, for fixed β, by λ̃_β. These theoretical results are specialized to censored linear regression and to a class of semiparametric survival analysis regression models including the proportional hazards models with unobserved random effect or 'frailty', the latter through results of Slud and Vonta (Scand. J. Statist. 31 (2004) 21) characterizing the restricted Kullback–Leibler information minimizers.
- Published
- 2005
- Full Text
- View/download PDF
13. Consistency of the NPML Estimator in the Right-Censored Transformation Model
- Author
- Eric V. Slud and Filia Vonta
- Subjects
Statistics and Probability ,Survival function ,Consistency (statistics) ,Expectation–maximization algorithm ,Statistics ,Consistent estimator ,Estimator ,Limit (mathematics) ,Function (mathematics) ,Statistics, Probability and Uncertainty ,Marginal distribution ,Mathematics - Abstract
This paper studies the representation and large-sample consistency for nonparametric maximum likelihood estimators (NPMLEs) of an unknown baseline continuous cumulative-hazard-type function and parameter of group survival difference, based on right-censored two-sample survival data with marginal survival function assumed to follow a transformation model, a slight generalization of the class of frailty survival regression models. The paper's main theoretical results are existence and unique a.s. limit, characterized variationally, for large data samples of the NPMLE of the baseline nuisance function in an appropriately defined neighbourhood of the true function when the group difference parameter is fixed, leading to consistency of the NPMLE when the difference parameter is fixed at a consistent estimator of its true value. The joint NPMLE is also shown to be consistent. An algorithm for computing it numerically, based directly on likelihood equations in place of the expectation-maximization (EM) algorithm, is illustrated with real data.
- Published
- 2004
- Full Text
- View/download PDF
14. Exact calculation of power and sample size in bioequivalence studies using two one-sided tests
- Author
- Eric V. Slud, Meiyu Shen, and Estelle Russek-Cohen
- Subjects
Pharmacology ,Statistics and Probability ,Cross-Over Studies ,Models, Statistical ,Monte Carlo method ,Crossover ,Univariate ,Context (language use) ,Bivariate analysis ,Bioequivalence ,Pharmaceutical Preparations ,Therapeutic Equivalency ,Sample size determination ,Sample Size ,Statistics ,Applied mathematics ,Humans ,Pharmacology (medical) ,Power function ,Mathematics - Abstract
The number of subjects in a pharmacokinetic two-period two-treatment crossover bioequivalence study is typically small, most often less than 60. The most common approach to testing for bioequivalence is the two one-sided tests procedure. No explicit mathematical formula for the power function in the context of the two one-sided tests procedure exists in the statistical literature, although the exact power based on Owen's special case of bivariate noncentral t-distribution has been tabulated and graphed. Several approximations have previously been published for the probability of rejection in the two one-sided tests procedure for crossover bioequivalence studies. These approximations and associated sample size formulas are reviewed in this article and compared for various parameter combinations with exact power formulas derived here, which are computed analytically as univariate integrals and which have been validated by Monte Carlo simulations. The exact formulas for power and sample size are shown to improve markedly in realistic parameter settings over the previous approximations.
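The exact power formulas themselves are not reproduced in this listing, but the quantity they compute can be checked by a Monte Carlo sketch of the TOST procedure under the standard log-scale 2x2 crossover model; the CV, bioequivalence limits, sample size, and the independence of the estimated mean difference and its variance estimate below are stated assumptions of the sketch:

    import numpy as np
    from scipy import stats

    def tost_power_mc(n_per_seq, true_ratio=1.0, cv=0.25, alpha=0.05,
                      limits=(0.80, 1.25), n_sim=20000, seed=0):
        """Monte Carlo power of the two one-sided tests (TOST) procedure for a
        two-period two-treatment crossover, under the usual log-scale model."""
        rng = np.random.default_rng(seed)
        sigma_w = np.sqrt(np.log(1 + cv ** 2))      # within-subject SD on the log scale
        n = 2 * n_per_seq                           # total subjects, equal sequence sizes
        df = n - 2
        se = sigma_w * np.sqrt(2.0 / n)             # SE of the estimated T-vs-R log-mean difference
        tcrit = stats.t.ppf(1 - alpha, df)
        dhat = rng.normal(np.log(true_ratio), se, n_sim)          # estimated log difference
        s2 = sigma_w ** 2 * rng.chisquare(df, n_sim) / df         # independent variance estimate
        sehat = np.sqrt(s2 * 2.0 / n)
        reject = ((dhat - np.log(limits[0])) / sehat > tcrit) & \
                 ((dhat - np.log(limits[1])) / sehat < -tcrit)
        return reject.mean()

    print("approximate TOST power, 24 subjects per sequence:", tost_power_mc(24))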
- Published
- 2014
15. Testing for Imperfect Debugging in Software Reliability
- Author
- Eric V. Slud
- Subjects
Statistics and Probability ,Score test ,business.industry ,media_common.quotation_subject ,Order statistic ,Score ,Mixture model ,Software quality ,Software ,Debugging ,Likelihood-ratio test ,Statistics ,Statistics, Probability and Uncertainty ,business ,Algorithm ,media_common ,Mathematics - Abstract
This paper continues the study of the software reliability model of Fakhre-Zakeri & Slud (1995), an "exponential order statistic model" in the sense of Miller (1986) with general mixing distribution, imperfect debugging and large-sample asymptotics reflecting increase of the initial number of bugs with software size. The parameters of the model are θ (proportional to the initial number of bugs in the software), G(·, ρ) (the mixing df, with finite-dimensional unknown parameter ρ, for the rates λ_i with which the bugs in the software cause observable system failures), and p (the probability with which a detected bug is instantaneously replaced with another bug instead of being removed). Maximum likelihood estimation theory for (θ, ρ, p) is applied to construct a likelihood-based score test for large sample data of the hypothesis of "perfect debugging" (p = 0) vs "imperfect" (p > 0) within the models studied. There are important models (including the Jelinski-Moranda) under which the score statistics with 1/√n normalization are asymptotically degenerate. These statistics, illustrated on software reliability data of Musa (1980), can serve nevertheless as important diagnostics for inadequacy of simple models.
- Published
- 1997
- Full Text
- View/download PDF
16. Miscellanea. Semiparametric two-sample tests in clinical trials with a post-randomisation response indicator
- Author
- Edward L. Korn and Eric V. Slud
- Subjects
Statistics and Probability ,Clinical trial ,Applied Mathematics ,General Mathematics ,Statistics ,Econometrics ,Estimator ,Observational study ,Two sample ,Statistics, Probability and Uncertainty ,General Agricultural and Biological Sciences ,Agricultural and Biological Sciences (miscellaneous) ,Mathematics - Abstract
In many clinical trials involving survival endpoints, one has additional data on some binary indicator of 'response', such as initial tumour shrinkage in cancer trials. This paper studies the case of randomised clinical trials where the response indicator is available shortly after randomisation, and where one can assume that, within each stratum defined by the response indicator, a two-treatment-group proportional-hazards model holds. The same model may also describe some incompletely randomised or observational studies. Asymptotic relative efficiencies for Kaplan-Meier-based estimators versus maximum partial likelihood estimators are examined under this model for estimating either the difference in survival probabilities at a specified time or the parameter estimated by the logrank numerator. It is shown that the efficiency gains using the model are more promising when estimating the difference in survival probabilities. An example is given comparing the long-term survival experience of two groups of patients with advanced Hodgkin's disease.
- Published
- 1997
- Full Text
- View/download PDF
17. Inaccuracy rates and Hodges-Lehmann large deviation rates for parametric inferences with nuisance parameters
- Author
- Eric V. Slud and Antonis Koutsoukos
- Subjects
Statistics and Probability ,Score test ,Hodges–Lehmann estimator ,Applied Mathematics ,Test score ,Scalar (mathematics) ,Statistics ,Nuisance parameter ,Statistics, Probability and Uncertainty ,Implicit function theorem ,Probability measure ,Parametric statistics ,Mathematics - Abstract
In the context of parametric inference for a scalar parameter β in the presence of a finite-dimensional nuisance parameter λ based on a large random sample X 1 , …, X n , this paper calculates an exact one-sided inaccuracy rate for maximum-likelihood and M-estimators, as well as the Hodges-Lehmann (1956) large deviation rate for type-II error probabilities under fixed alternatives. The method is to couple the large-deviation theorems of Groeneboom et al. (1979) for empirical measures with a characterization via the Implicit Function Theorem of ‘least favorable measures’ extremizing the Kullback-Leibler information functional over statistically interesting sets of measures.
- Published
- 1995
- Full Text
- View/download PDF
18. Maximin efficiency-robust tests and some extensions
- Author
- Sudip Bose and Eric V. Slud
- Subjects
Statistics and Probability ,Asymptotic power ,Applied Mathematics ,Bayes test ,Decision theory ,Rank (computer programming) ,Statistics ,Bayesian probability ,Score ,Statistics, Probability and Uncertainty ,Minimax ,Mathematics - Abstract
The Maximin Efficiency-Robust Test idea of Gastwirth (1966) was to maximize the minimum asymptotic power (for fixed size) versus special local families of alternatives over some specially chosen families of score statistics. This approach is reviewed from a general decision-theoretical perspective, including some Bayesian variants. For two-sample censored-data rank tests and stochastically ordered but not proportional-hazard alternatives, the MERT approach leads to customized weighted-logrank tests for which the weights depend on estimated random-censoring distributions. Examples include statistics which perform well against both Lehmann and logistic alternatives or against families of alternatives which include increasing, decreasing, and ‘bathtub-shaped’ hazards.
- Published
- 1995
- Full Text
- View/download PDF
19. Mixture models for reliability of software with imperfect debugging: Identifiability of parameters
- Author
- I. Fakhre-Zakeri and Eric V. Slud
- Subjects
Discrete mathematics ,Hazard (logic) ,Logarithm ,Estimation theory ,Statistics ,Identifiability ,Function (mathematics) ,Electrical and Electronic Engineering ,Stieltjes transformation ,Safety, Risk, Reliability and Quality ,Lambda ,Mixture model ,Mathematics - Abstract
A class of software-reliability mixture-type models is introduced in which individual bugs come with i.i.d. random failure-causation rates λ, and have conditional hazard function φ(t|λ) for software failure times. The models allow the possibility of imperfect debugging, in that at each failure a new bug (possibly with another rate-parameter λ) is introduced, statistically independently of the past, with probability p. For φ(t|λ) = λ, it is shown that the unknown parameters p, n_0 (the initial number of bugs), and G (the cdf for λ) are uniquely determined from the probability law of the failure-count function (N(t), 0 ≤ t ≤ δ), for arbitrary δ > 0. The parameters (n_0, G) are also uniquely determined by the mean failure-count function E{N(t)} when p is known (e.g., is assumed to be 0), but not when p is unknown. For special parametric classes of G, the parameters (n_0, p, G) are uniquely determined by (E{N(t)}, 0 ≤ t ≤ δ).
- Published
- 1995
- Full Text
- View/download PDF
20. Best precedence tests for censored data
- Author
- Eric V. Slud
- Subjects
Statistics and Probability ,Optimal test ,Applied Mathematics ,Rank (computer programming) ,Sample (statistics) ,Extension (predicate logic) ,Survival data ,Fixed duration ,Statistics ,Econometrics ,Statistics, Probability and Uncertainty ,Kaplan–Meier estimator ,Quantile ,Mathematics - Abstract
The prevalence of survival analyses based on a fixed duration of time-on-test, together with the need for generally powerful two-sample censored-data rank tests against stochastically ordered but not proportional-hazard alternatives, is used to motivate an extension of 'precedence tests' (Nelson (1963), Lin and Sukhatme (1989)) to right-censored survival data. The idea is to compare the r-th Kaplan-Meier quantile from the first sample with the s-th Kaplan-Meier quantile from the second sample, where r and s are nearby values chosen to give size α and best power against local proportional-hazards alternatives.
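A small self-contained sketch of the quantile comparison is given below; the Kaplan-Meier routine is written out by hand, and the quantile levels and the simple comparison at the end are placeholders rather than the paper's size-alpha calibration of r and s:

    import numpy as np

    def km_survival(times, events):
        """Kaplan-Meier survival estimates at the distinct observed event times."""
        order = np.argsort(times)
        times, events = times[order], events[order]
        s, uniq, surv = 1.0, [], []
        for u in np.unique(times[events == 1]):
            at_risk = np.sum(times >= u)
            d = np.sum((times == u) & (events == 1))
            s *= 1.0 - d / at_risk
            uniq.append(u)
            surv.append(s)
        return np.array(uniq), np.array(surv)

    def km_quantile(times, events, p):
        """Smallest time at which the Kaplan-Meier survival curve drops to 1 - p or below."""
        u, s = km_survival(times, events)
        hit = np.where(s <= 1 - p)[0]
        return u[hit[0]] if hit.size else np.inf

    # Two illustrative right-censored samples (exponential event and censoring times)
    rng = np.random.default_rng(3)
    t1, c1 = rng.exponential(1.0, 50), rng.exponential(2.0, 50)
    t2, c2 = rng.exponential(1.5, 50), rng.exponential(2.0, 50)
    x1, d1 = np.minimum(t1, c1), (t1 <= c1).astype(int)
    x2, d2 = np.minimum(t2, c2), (t2 <= c2).astype(int)

    q1 = km_quantile(x1, d1, 0.40)   # quantile from sample 1 (level chosen for illustration)
    q2 = km_quantile(x2, d2, 0.30)   # nearby quantile from sample 2
    print("KM quantiles: sample 1 = %.3f, sample 2 = %.3f, sample 1 precedes: %s" % (q1, q2, q1 < q2))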
- Published
- 1992
- Full Text
- View/download PDF
21. Relative efficiency of the log rank test within a multiplicative intensity model
- Author
- Eric V. Slud
- Subjects
Statistics and Probability ,Hazard (logic) ,Score test ,Applied Mathematics ,General Mathematics ,Multiplicative function ,Agricultural and Biological Sciences (miscellaneous) ,Intensity (physics) ,Log-rank test ,Efficiency ,Statistics ,Covariate ,Log-linear model ,Statistics, Probability and Uncertainty ,General Agricultural and Biological Sciences ,Mathematics - Abstract
For large-sample clinical trials with independent individuals randomly allocated to two treatment groups, in which survival times follow a log-linear multiplicative intensity model with treatment group as one covariate, this paper calculates the asymptotic relative efficiency of the log rank test for treatment effect as compared with the optimal score test. The method is to exhibit the failure hazard intensity, not of proportional hazards form, obtained by ignoring all covariates other than treatment group. The efficiency formulae are illustrated in two examples, and estimation from data of the loss of efficiency is illustrated for two clinical trial datasets.
- Published
- 1991
- Full Text
- View/download PDF
22. Letter to the editor by the authors of Exact Calculation of Power and Sample Size in Bioequivalence Studies Using Two One-sided Tests, Pharmaceutical Statistics, DOI: 10.1002/pst.1666
- Author
- Meiyu Shen, Eric V. Slud, and Estelle Russek-Cohen
- Subjects
Pharmacology ,Statistics and Probability ,Food and drug administration ,Letter to the editor ,Sample size determination ,One sided ,Statistics ,Econometrics ,Pharmacology (medical) ,Bioequivalence ,Mathematics - Abstract
This article reflects the views of the authors and should not be construed to be those of the US Food and Drug Administration. Copyright © 2015 John Wiley & Sons, Ltd.
- Published
- 2015
- Full Text
- View/download PDF
23. Optimal stopping of sequential size-dependent search
- Author
- Eric V. Slud and Issa Fakhre-Zakeri
- Subjects
Statistics and Probability ,education.field_of_study ,mixture model ,Population ,Sampling (statistics) ,Function (mathematics) ,Mixture model ,size-dependent successive sampling ,Convexity ,loss function ,Asymptotically optimal algorithm ,imperfect debugging ,62L15 ,Statistics ,sequential search ,Optimal stopping ,exponential order statistic model ,Statistics, Probability and Uncertainty ,Asymptotically optimal stopping rule ,education ,60G40 ,Mathematics ,Linear search - Abstract
In many areas of application, one searches within finite populations for items of interest, where the probability of sampling an item is proportional to a random size attribute from an i.i.d. superpopulation of attributes which may or may not be observable upon discovery. Here we treat the problem of asymptotically optimal stopping rules for size-dependent searches of this type, as the size of the underlying population grows, where the loss function includes an asymptotically smooth time-dependent cost, a constant cost per item sampled and a cost per undiscovered item which may depend on the size attribute of the undiscovered item. Under some regularity and convexity conditions related to the asymptotic expected loss, we characterize asymptotically optimal rules even when the initial population size and the distribution of size attributes are unknown. We direct especial attention to applications in software reliability, where the items of interest are software faults ("bugs"). In this setting, the size attributes will not be observable when faults are found, and, in addition, our search model allows new bugs to be introduced into the software when faults are detected ("imperfect debugging"). Our results extend those of Dalal and Mallows and Kramer and Starr, and are illustrated in the perfect-debugging case on a previously analyzed dataset of Musa.
- Published
- 1996
24. Nonparametric Identifiability of Marginal Survival Distributions in the Presence of Dependent Competing Risks and a Prognostic Covariate
- Author
- Ian W. McKeague and Eric V. Slud
- Subjects
Conditional independence ,Goodness of fit ,Joint probability distribution ,Censoring (clinical trials) ,Statistics ,Covariate ,Econometrics ,Nonparametric statistics ,Identifiability ,Marginal distribution ,Mathematics - Abstract
It is well known that survival data randomly censored from the right by deaths from a competing risk do not allow nonparametric identifiability of marginal survival distributions when survival times and competing-risk censoring times are dependent (Tsiatis 1975). Parametric models for the joint distribution of survival and competing-risk censoring times cannot solve the problem since the goodness of fit of such models cannot be tested with observable data. Nevertheless, there are many such settings where marginal “latent” survival distributions are a desirable object of inference, expressing most clearly the underlying biological failure mechanism disentangled from physiologically distinct effects. One hope to overcome the obstacle of nonidentifiability is to make use of observable covariate data which are prognostic only for the latent survival times and not for the competing risk. In this paper, it is shown how the marginal distribution of the latent survival time T can be nonparametrically identifiable when only the data min(T, C), I[T≤C], and V are observed, where C is a latent competing-risk censoring time and V is an observed covariate such that C and V are conditionally independent given T.
- Published
- 1992
- Full Text
- View/download PDF
25. Analysis of Factorial Survival Experiments
- Author
- Eric V. Slud
- Subjects
Statistics and Probability ,Factorial ,General Immunology and Microbiology ,Proportional hazards model ,Applied Mathematics ,General Medicine ,Factorial experiment ,Asymptotic theory (statistics) ,General Biochemistry, Genetics and Molecular Biology ,Covariate ,Statistics ,Econometrics ,Main effect ,General Agricultural and Biological Sciences ,Null hypothesis ,Mathematics ,Statistical hypothesis testing - Abstract
Several new methodological issues that arise within two-way factorial designs for survival experiments are discussed within the framework of asymptotic theory for the proportional hazards model with two binary treatment covariates. These issues include: the proper formulation of null hypotheses and alternatives, the choice among log-rank and adjusted or stratified log-rank statistics, the asymptotic correlation between test statistics for the separate main effects, the asymptotic power (under the various possible methods of analysis) of tests to detect main effects and interactions, the comparison of power to detect main effects within a 2 x 2 factorial design with power in a three-group trial where no patients are randomized simultaneously to both treatments, and the problems of analysis arising when accrual or exposure to one of the treatments is terminated early for ethical reasons.
- Published
- 1994
- Full Text
- View/download PDF
26. Two-Sample Repeated Significance Tests Based on the Modified Wilcoxon Statistic
- Author
- Eric V. Slud and L. J. Wei
- Subjects
Statistics and Probability ,Asymptotic analysis ,Early stopping ,Wilcoxon signed-rank test ,Log-rank test ,symbols.namesake ,Significance testing ,Statistics ,symbols ,Two sample ,Statistics, Probability and Uncertainty ,Gaussian process ,Statistic ,Mathematics - Abstract
The asymptotic distribution theory of sequentially computed modified-Wilcoxon scores is developed for two-sample survival data with random staggered entry and random loss to follow-up. The asymptotic covariance indicates generally dependent modified-Wilcoxon increments, contradicting (the authors' reading of) Jones and Whitehead (1979). A repeated significance testing procedure is presented for testing the equality of two survival distributions based on the asymptotic theory. The early stopping properties of this procedure are illustrated by a prostate cancer example.
- Published
- 1982
- Full Text
- View/download PDF
27. Consistency and efficiency of inferences with the partial likelihood
- Author
- Eric V. Slud
- Subjects
Statistics and Probability ,Score test ,Consistency (statistics) ,Applied Mathematics ,General Mathematics ,Statistics ,Statistics, Probability and Uncertainty ,General Agricultural and Biological Sciences ,Likelihood function ,Agricultural and Biological Sciences (miscellaneous) ,Likelihood principle ,Marginal likelihood ,Mathematics - Published
- 1982
- Full Text
- View/download PDF
28. Dependent competing risks and summary survival curves
- Author
- Eric V. Slud and Larry Rubinstein
- Subjects
Statistics and Probability ,Waiting time ,Applied Mathematics ,General Mathematics ,Nonparametric statistics ,Estimator ,Conditional probability distribution ,Competing risks ,Agricultural and Biological Sciences (miscellaneous) ,Censoring (clinical trials) ,Statistics ,Econometrics ,Statistics, Probability and Uncertainty ,Marginal distribution ,General Agricultural and Biological Sciences ,Survival analysis ,Mathematics - Abstract
In many contexts where there is interest in inferring the marginal distribution of a survival time T subject to censoring embodied in a latent waiting time C, the times T and C may not be independent. This paper presents a new class of nonparametric assumptions on the conditional distribution of T given C and shows how they lead to consistent generalizations of the Kaplan & Meier (1958) survival curve estimator. The new survival curve estimators are used under weak assumptions to construct bounds on the marginal survival which can be much narrower than those of Peterson (1976). In stratified populations where T and C are independent only within strata, examples indicate that the Kaplan-Meier estimator is often approximately consistent.
- Published
- 1983
- Full Text
- View/download PDF
29. Inefficiency of inferences with the partial likelihood
- Author
- Eric V. Slud
- Subjects
Statistics and Probability ,Inference ,Regression analysis ,Function (mathematics) ,Interpretation (model theory) ,symbols.namesake ,Efficiency ,Dimension (vector space) ,Statistics ,Econometrics ,symbols ,Inefficiency ,Fisher information ,Mathematics - Abstract
In two-sample semiparametric survival models other than the Cox proportional-hazards regression model, it is shown that partial-likelihood inference of structural parameters in the presence of fully nonparametric nuisance hazards typically has relative efficiency zero compared with full-likelihood inference. The practical interpretation of efficiencies in the presence of infinite-dimensional nuisance parameters is discussed, with reference to two important examples, namely a recent survival regression model of Clayton and Cuzick and a class of additive excess-risk models. Under the excess-risk models, a formula is derived for the large-sample information [which here is the same as the limiting Fisher information when the nuisance-parameter dimension gets large] for estimating the parameter of difference between two samples, as the nuisance function becomes fully nonparametric.
- Published
- 1986
- Full Text
- View/download PDF
30. Testing separate families of hypotheses using right-censored data
- Author
- Eric V. Slud
- Subjects
Statistics and Probability ,Modeling and Simulation ,Statistics ,Mathematics - Published
- 1983
- Full Text
- View/download PDF
31. Sequential Linear Rank Tests for Two-Sample Censored Survival Data
- Author
- Eric V. Slud
- Subjects
linear rank statistics ,62E20 ,Statistics and Probability ,Independent and identically distributed random variables ,logrank statistic ,sequential testing ,Invariance principle ,Censored survival data ,martingale central limit theorems ,law.invention ,Randomized controlled trial ,law ,Sequential analysis ,Statistics ,62L10 ,counting processes ,Statistics, Probability and Uncertainty ,60G42 ,Martingale (probability theory) ,Null hypothesis ,Statistic ,62G10 ,Central limit theorem ,Mathematics - Abstract
Under extremely general patterns of patient-arrival, allocation to treatment and loss to follow-up in (randomized) clinical trial settings, the sequentially computed logrank statistic (Mantel, 1966) is shown (under the null hypothesis of identically distributed lifetimes) to have exactly uncorrelated increments, and is shown via Rebolledo's (1980) martingale invariance principle to satisfy a functional central limit theorem, justifying sequential logrank tests of Jones and Whitehead (1979). Generalizations are made to other two-sample rank tests for censored survival data, and practical applicability to real randomized clinical trials is discussed.
- Published
- 1984
- Full Text
- View/download PDF
32. Simulation Studies on Increments of the Two-Sample Logrank Score Test for Survival Time Data, with Application to Group Sequential Boundaries
- Author
- Eric V. Slud, Mitchell H. Gail, and David L. DeMets
- Subjects
Score test ,Statistics ,Group sequential ,Econometrics ,Two sample ,Time data ,Mathematics - Published
- 1982
33. How Dependent Causes of Death Can Make Risk Factors Appear Protective
- Author
- David P. Byar and Eric V. Slud
- Subjects
Statistics and Probability ,General Immunology and Microbiology ,Applied Mathematics ,Statistics ,Covariate ,General Medicine ,General Agricultural and Biological Sciences ,Competing risks ,General Biochemistry, Genetics and Molecular Biology ,Mathematics - Abstract
It is shown, using the results of Slud and Rubinstein (1983, Biometrika 70, 643-649) in a specially constructed theoretical example, that competing latent failure times T_i and C_i and a two-level covariate V_i, if analyzed as though T_i and C_i are independent for each level of V_i, can lead to exactly the wrong conclusion about the ordering of Pr(T_i ≥ t | V_i = 1) and Pr(T_i ≥ t | V_i = 0) for every t. This phenomenon can never be excluded on purely statistical grounds using such data and should be considered when interpreting data analyses involving competing risks.
- Published
- 1988
- Full Text
- View/download PDF
34. A Comparison of Reflected Versus Test-Based Confidence Intervals for the Median Survival Time, Based on Censored Data
- Author
- David P. Byar, Eric V. Slud, and Sylvan B. Green
- Subjects
Statistics and Probability ,General Immunology and Microbiology ,Applied Mathematics ,Nonparametric statistics ,General Medicine ,Time based ,Censoring (statistics) ,General Biochemistry, Genetics and Molecular Biology ,Confidence interval ,Statistics ,Econometrics ,Statistical analysis ,General Agricultural and Biological Sciences ,Median survival ,Survival analysis ,Mathematics - Abstract
The small-sample performance of some recently proposed nonparametric methods of constructing confidence intervals for the median survival time, based on randomly right-censored data, is compared with that of two new methods. Most of these methods are equivalent for large samples. All proposed intervals are either 'test-based' or 'reflected' intervals, in the sense defined in the paper. Coverage probabilities for the interval estimates were obtained by exact calculation for uncensored data, and by simulation for three life distributions and four censoring patterns. In the range of situations studied, 'test-based' methods often have less than nominal coverage, while the coverage of the new 'reflected' confidence intervals is closer to nominal (although somewhat conservative), and these intervals are easy to compute.
- Published
- 1984
- Full Text
- View/download PDF