22 results for "Schervish, Mark"
Search Results
2. When Several Bayesians Agree That There Will Be No Reasoning to a Foregone Conclusion
- Author
- Kadane, Joseph B., Schervish, Mark J., and Seidenfeld, Teddy
- Published
- 1996
3. An Optimal Multiple Decision Rule for Signs of Parameters
- Author
- Bohrer, Robert and Schervish, Mark J.
- Published
- 1980
4. A Review of Multivariate Analysis
- Author
- Schervish, Mark J.
- Published
- 1987
5. Graduate Education in Computational Statistics
- Author
- Eddy, William F., Jones, Albyn C., Kass, Robert E., and Schervish, Mark J.
- Published
- 1987
- Full Text
- View/download PDF
6. User-Oriented Inference
- Author
- Schervish, Mark J.
- Published
- 1983
- Full Text
- View/download PDF
7. Non-Conglomerability for Finite-Valued, Finitely Additive Probability
- Author
- Seidenfeld, Teddy, Schervish, Mark J., and Kadane, Joseph B.
- Subjects
- Mathematics::Logic, Statistics, FOS: Mathematics, Mathematics::General Topology, Probability
- Abstract
We consider how an unconditional, finite-valued, finitely additive probability P on a countable set may localize its non-conglomerability (non-disintegrability). Non-conglomerability, a characteristic of merely finitely additive probability, occurs when the unconditional probability of an event, P(E), lies outside the closed interval of conditional probability values, [inf_{h∈π} P(E|h), sup_{h∈π} P(E|h)], taken from a countable partition π = {h_j : j = 1, 2, ...}. The problem we address is how to identify events and partitions where a finite-valued, finitely additive probability fails to satisfy conglomerability. We focus on the extreme case of 2-valued finitely additive probabilities that are not countably additive. These are, equivalently, non-principal ultrafilters. Evidently, the challenge we face is that, given a countable partition, at most one of its elements has positive probability under P. Thus, we must find ways of regulating the coherent conditional probabilities, given null events, that cohere with the unconditional probability P. Our analysis of P proceeds by the use of combinatorial properties of the associated non-principal ultrafilter U_P. We show that when the ultrafilter U_P is not minimal in the Rudin-Keisler partial order of β(ω)\ω, we may locate a partition in which P fails to satisfy the conglomerability principle by examining (at most) countably many partitions. This result is then applied to finitely additive probabilities that assume only finitely many values. By contrast, if the ultrafilter U_P is Rudin-Keisler minimal, then P is simultaneously conglomerable in each finite collection of partitions, though not simultaneously conglomerable in all partitions.
- Published
- 2018
- Full Text
- View/download PDF
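The conglomerability principle at issue in the abstract above can be stated compactly; the following is a restatement in standard notation (E an event, π a countable partition), not a formula taken from the paper itself:

```latex
% Conglomerability of P for event E in a countable partition \pi = \{h_1, h_2, \dots\}:
% the unconditional probability lies in the closed interval spanned by the
% conditional probabilities over the cells of the partition.
\inf_{h \in \pi} P(E \mid h) \;\le\; P(E) \;\le\; \sup_{h \in \pi} P(E \mid h)
```

Non-conglomerability is the failure of this inequality for some event E and partition π.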
8. A Bayesian Hierarchical method for fitting multiple health endpoints in a toxicity study
- Author
-
Taeryon Choi, Schervish, Mark J., Ketra A. Schmitt, and Small, Mitchell
- Subjects
Statistics ,FOS: Mathematics ,Probability - Abstract
Bayesian hierarchical models are built to fit multiple health endpoints from a dose-response study of a toxic chemical, perchlorate. Perchlorate exposure results in iodine uptake inhibition in the thyroid, with health effects manifested by changes in blood hormone concentrations and histopathological effects on the thyroid. We propose linked empirical models to fit blood hormone concentration and thyroid histopathology data for rats exposed to perchlorate in the 90-day study of Springborn Laboratory Inc. (1998), based upon the assumed toxicological relationships between dose and the various endpoints. All of the models are fit in a Bayesian framework, and predictions about each endpoint in response to dose are simulated based on the posterior predictive distribution. A hierarchical model tries to exploit possible similarities between different combinations of sex and exposure duration, and it allows us to produce more stable estimates of dose-response curves. We also illustrate how the hierarchical model allows us to address additional questions that arise after the analysis.
- Published
- 2010
- Full Text
- View/download PDF
9. Posterior Consistency in Nonparametric Regression Problems under Gaussian Process Priors
- Author
-
Taeryon Choi and Schervish, Mark J.
- Subjects
Statistics::Machine Learning ,Statistics::Theory ,Statistics ,FOS: Mathematics ,Statistics::Methodology ,Statistics::Computation ,Probability - Abstract
Posterior consistency can be thought of as a theoretical justification of the Bayesian method. One of the most popular approaches to nonparametric Bayesian regression is to put a nonparametric prior distribution on the unknown regression function using Gaussian processes. In this paper, we study posterior consistency in nonparametric regression problems using Gaussian process priors. We use an extension of the theorem of Schwartz (1965) for nonidentically distributed observations, verifying its conditions when using Gaussian process priors for the regression function with normal or double exponential (Laplace) error distributions. We define a metric topology on the space of regression functions and then establish almost sure consistency of the posterior distribution. Our metric topology is weaker than the popular L1 topology. With additional assumptions, we prove almost sure consistency with respect to the L1 topology. When the covariate (predictor) is assumed to be a random variable, we prove almost sure consistency for the joint density function of the response and predictor using the Hellinger metric.
- Published
- 2007
- Full Text
- View/download PDF
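As a minimal illustration of the setup described in the abstract above — a Gaussian process prior on an unknown regression function with normal errors — the following sketch draws one function from such a prior. The squared-exponential kernel and all parameter values here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 20)                 # design points for the covariate

# Squared-exponential covariance kernel (an illustrative choice of prior).
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.1**2)

# One draw from the GP prior over regression functions (jitter for stability).
f = rng.multivariate_normal(np.zeros(x.size), K + 1e-8 * np.eye(x.size))

# Observations under the normal-error model y_i = f(x_i) + eps_i.
y = f + rng.normal(0.0, 0.1, size=x.size)
```

Posterior consistency then concerns whether, as the number of such observations grows, the posterior over regression functions concentrates near the truth.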
10. Extensions of Expected Utility Theory and some Limitations of Pairwise Comparisons
- Author
- Schervish, Mark J., Seidenfeld, Teddy, and Kadane, Joseph B.
- Subjects
- Statistics, FOS: Mathematics, Probability
- Abstract
We contrast three decision rules that extend Expected Utility to contexts where a convex set of probabilities is used to depict uncertainty: Γ-Maximin, Maximality, and E-admissibility. The rules extend Expected Utility theory in that they require an option to be inadmissible if there is another that carries greater expected utility for each probability in a (closed) convex set. If the convex set is a singleton, then each rule agrees with maximizing expected utility. We show that, even when the option set is convex, this pairwise comparison between acts may fail to identify those acts which are Bayes for some probability in a convex set that is not closed. This limitation affects two of the decision rules but not E-admissibility, which is not a pairwise decision rule. E-admissibility can be used to distinguish between two convex sets of probabilities that intersect all the same supporting hyperplanes.
- Published
- 2006
- Full Text
- View/download PDF
11. Covariance Tapering for Likelihood Based Estimation in Large Spatial Datasets
- Author
- Kaufman, Cari G., Schervish, Mark J., and Nychka, Douglas
- Subjects
- Statistics, FOS: Mathematics, Physics::Optics, Physics::Atmospheric and Oceanic Physics, Probability
- Abstract
Maximum likelihood is an attractive method of estimating covariance parameters in spatial models based on Gaussian processes. However, calculating the likelihood can be computationally infeasible for large datasets, requiring O(n^3) operations for n observations. This article proposes the method of covariance tapering to approximate the likelihood in this setting. In this approach, covariance matrices are "tapered," or multiplied element-wise by a sparse correlation matrix. The resulting matrices can then be manipulated using efficient sparse matrix algorithms. We propose two approximations to the Gaussian likelihood using tapering. One simply replaces the model covariance with a tapered version; the other is motivated by the theory of unbiased estimating equations. Focusing on the particular case of the Matérn class of covariance functions, we give conditions under which estimators maximizing the tapering approximations are, like the maximum likelihood estimator, strongly consistent. Moreover, we show in a simulation study that the tapering estimators can have sampling densities quite similar to that of the maximum likelihood estimator, even when the degree of tapering is severe. We illustrate the accuracy and computational gains of the tapering methods in an analysis of yearly total precipitation anomalies at weather stations in the United States.
- Published
- 2003
- Full Text
- View/download PDF
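The element-wise tapering step described in the abstract can be sketched as follows — a toy 1-D example under assumed parameter values (exponential covariance, spherical taper); a real application would hand the sparse result to scipy's sparse matrix routines for the Cholesky and determinant computations:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 50))       # spatial locations in 1-D
d = np.abs(x[:, None] - x[None, :])           # pairwise distance matrix

# Exponential covariance: the Matern class with smoothness 1/2, range 2.
K = np.exp(-d / 2.0)

# Spherical taper: a compactly supported correlation, exactly 0 beyond gamma.
gamma = 1.5
taper = np.where(d < gamma, (1 - d / gamma)**2 * (1 + d / (2 * gamma)), 0.0)

# "Tapered" covariance: element-wise (Schur) product; sparse by construction.
K_tap = K * taper
sparsity = np.mean(K_tap == 0.0)              # fraction of exact zeros
```

Because the taper is itself a valid (positive definite) correlation function, the Schur product theorem guarantees K_tap remains a valid covariance matrix while most of its entries are exactly zero.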
12. When coherent preferences may not preserve indifference between equivalent random variables: A price for unbounded utilities
- Author
- Seidenfeld, Teddy, Schervish, Mark J., and Kadane, Joseph B.
- Subjects
- Statistics, FOS: Mathematics, Probability
- Abstract
We extend de Finetti’s (1974) theory of coherence to apply also to unbounded random variables. We show that for random variables with mandated infinite prevision, such as the St. Petersburg gamble, coherence precludes indifference between equivalent random quantities. That is, we demonstrate when the prevision of the difference between two such equivalent random variables must be positive. This result conflicts with the usual approach to theories of Subjective Expected Utility, where preference is defined over lotteries. In addition, we explore similar results for unbounded variables when their previsions, though finite, exceed their expected values, as is permitted within de Finetti’s theory. In such cases, the decision maker’s coherent preferences over random quantities are not even a function of probability and utility. One upshot of these findings is to explain further the differences between Savage’s theory (1954), which requires bounded utility for non-simple acts, and de Finetti’s theory, which does not. It also raises the question of whether there is a theory that fits between these two.
- Published
- 2002
- Full Text
- View/download PDF
13. A Rate of Incoherence Applied to Fixed‐Level Testing
- Author
- Schervish, Mark J., Seidenfeld, Teddy, and Kadane, Joseph B.
- Subjects
- Statistics, FOS: Mathematics, Probability
- Abstract
It has long been known that the practice of testing all hypotheses at the same level (such as 0.05), regardless of the distribution of the data, is not consistent with Bayesian expected utility maximization. According to de Finetti’s “Dutch Book” argument, procedures that are not consistent with expected utility maximization are incoherent, and they lead to gambles that are sure to lose no matter what happens. In this paper, we use a method to measure the rate at which incoherent procedures are sure to lose, so that we can distinguish slightly incoherent procedures from grossly incoherent ones. We present an analysis of testing a simple hypothesis against a simple alternative as a case study of how the method can work.
- Published
- 2001
- Full Text
- View/download PDF
14. Measuring Incoherence
- Author
- Schervish, Mark J., Seidenfeld, Teddy, and Kadane, Joseph B.
- Subjects
- Statistics, FOS: Mathematics, Probability
- Abstract
The degree of incoherence, when previsions are not made in accordance with a probability measure, is measured by the rate at which an incoherent bookie can be made a sure loser. Each bet is rescaled by one of several normalizations to account for the overall sizes of bets. For each normalization, the sure loss for incoherent previsions is divided by the normalization to determine the rate of incoherence. We study several properties of normalizations and degrees of incoherence and present some examples. Potential applications include the degree of incoherence of classical statistical procedures.
- Published
- 2001
- Full Text
- View/download PDF
15. Forecasting with imprecise probabilities
- Author
- Seidenfeld, Teddy, Schervish, Mark J., and Kadane, Joseph B.
- Subjects
- PREDICTION theory, PROBABILITY theory, GENERALIZATION, DECISION making, SET theory, STATISTICS
- Abstract
We review de Finetti’s two coherence criteria for determinate probabilities: coherence_1, defined in terms of previsions for a set of events that are undominated by the status quo – previsions immune to a sure loss – and coherence_2, defined in terms of forecasts for events undominated in Brier score by a rival forecast. We propose a criterion of IP-coherence_2 based on a generalization of Brier score for IP-forecasts that uses 1-sided, lower and upper, probability forecasts. However, whereas Brier score is a strictly proper scoring rule for eliciting determinate probabilities, we show that there is no real-valued strictly proper IP-score. Nonetheless, with respect to either of two decision rules – Γ-maximin or (Levi’s) E-admissibility + Γ-maximin – we give a lexicographic strictly proper IP-scoring rule that is based on Brier score.
- Published
- 2012
- Full Text
- View/download PDF
16. Applications of Parallel Computation to Statistical Inference.
- Author
- Schervish, Mark J.
- Subjects
- MATHEMATICAL statistics, EXAMPLE, PERSONAL computers, ALGORITHMS, REASONING, DATA transmission systems, STATISTICS, MATHEMATICS
- Abstract
Recent advances in parallel computation (e.g., Eddy and Schervish 1986; Gardner, Gerard, Mowers, Nemeth, and Schnabel 1986) have made it possible for a network of microcomputers to act together as a parallel processor using data-flow algorithms (see O'Leary and Stewart 1985). A data-flow algorithm is one in which the sequence of computations is not scheduled a priori, but rather is determined by the order in which computations are completed. Many statistical problems can be adapted to data-flow algorithms. One requirement is that the computation can be broken into independent parts, that is, parts that can be performed in any order. For example, calculating the average of a large number of data values requires adding them up, which can be done in any order. Suppose that an application can be broken into independent parts and one has p processors available to perform the computations. Then each of the p processors can work on one of the parts at a time. When one processor finishes, it can work on the next part of the computation. Under the right conditions, this can allow a speedup of as much as a factor of p in the computation. The problems that must be solved are approximating the right conditions and breaking the application up accordingly. This article discusses the problem of subdividing a large task in such a way that it can run efficiently on a network of processors communicating over an Ethernet. The discussion centers on several examples. The first example concerns a mode of inference called discrete-finite inference (see Eddy and Schervish 1986). The second example is large-sample data analysis (see Kim and Schervish, in press). The third example concerns multiprocess time series models (see Schervish and Tsay 1988). The fourth example (Lehoczky and Schervish 1987) is a hierarchical model for the responses to...
- Published
- 1988
- Full Text
- View/download PDF
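The averaging example in the abstract — independent parts whose results can be folded in whatever order they finish — can be sketched with a modern thread pool (a hypothetical stand-in for the Ethernet-networked processors the article describes):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def part_sum(chunk):
    """One independent part: a partial sum and count over a chunk of the data."""
    return sum(chunk), len(chunk)

data = list(range(1, 101))
chunks = [data[i:i + 10] for i in range(0, len(data), 10)]

total, count = 0, 0
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(part_sum, c) for c in chunks]
    # Data-flow style: results are consumed in completion order, not in a
    # schedule fixed a priori -- correctness does not depend on the order.
    for fut in as_completed(futures):
        s, n = fut.result()
        total += s
        count += n

mean = total / count
```

Because each part is independent, up to four parts run concurrently, and the final mean is identical no matter which worker finishes first.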
17. Resolution of Godambe's paradox.
- Author
- Genest, Christian and Schervish, Mark J.
- Subjects
- STATISTICS, RANDOM variables, MATHEMATICAL statistics, MATHEMATICS problems & exercises, MATHEMATICAL analysis
- Abstract
Two versions of a statistical paradox concerning a certain 'principle of ancillarity' have recently been proposed by Godambe (1979, 1982). In this note, we show that the two versions of his paradox are in fact of entirely different natures, and we resolve both of them by examining them in light of the concept of random allocation. The first version of Godambe's paradox has already been criticized by Good (1980), but we believe his argument is flawed.
- Published
- 1985
- Full Text
- View/download PDF
18. Non-central Studentized Maximum and Related Multiple-t Probabilities.
- Author
- Bohrer, Robert, Schervish, Mark, and Shefi, Judith
- Subjects
- ALGORITHMS, STATISTICS, PROBABILITY theory
- Abstract
Presents a statistical algorithm for the computation of the non-central studentized maximum and related multiple-t probabilities. Cases wherein the algorithm can be used; Description and numerical method; Restrictions, time and accuracy.
- Published
- 1982
- Full Text
- View/download PDF
19. A One-Sided Goodness-of-Fit Test for a Multinomial Population: Comment.
- Author
- Schervish, Mark J.
- Subjects
- BAYESIAN analysis, GOODNESS-of-fit tests, DISTRIBUTION (Probability theory), STATISTICAL hypothesis testing, PROBABILITY theory, HYPOTHESIS, SCIENCE, STATISTICS, MATHEMATICS
- Abstract
The article presents the author's comments on I. Greenberg's one-sided goodness-of-fit test for an interesting problem involving multinomial proportions. Greenberg presents a likelihood ratio test whose solution was difficult to obtain and very complicated to describe, despite the extreme simplicity of the statement of the problem. It is very often the case in statistics and mathematics that a problem that is simple to state requires a very complicated solution. In the problem considered by Greenberg, however, there is a solution of simplicity equal to that of the problem's statement if one adopts the Bayesian approach. The advantages of the Bayesian approach in this problem are clear. First, the solution is as straightforward and simple as the problem. Second, the solution is easy to interpret as a degree of confidence in the hypothesis after seeing the data. There is no need to draw convoluted conclusions of the form "no sufficient evidence exists to reject the null hypothesis, although some indications of its falsity exist" or "the selection rule is consistent with each protected group's representation in the population, but overrepresentation of one or more of the protected groups will be tolerated."
- Published
- 1985
- Full Text
- View/download PDF
20. Comment.
- Author
- Schervish, Mark J.
- Subjects
- CALIBRATION, FORECASTING, RANKING (Statistics), MATHEMATICAL sequences, PROBABILITY theory, INDUCTION (Logic), UNCERTAINTY, STATISTICS
- Abstract
In this article, the author presents his views on a paper by David Oakes on statistical analysis. Oakes has shown that for each forecasting system, there exists a sequence of outcomes for which the system will not be calibrated. This implies that no forecasting system can be self-calibrating in the sense of A. P. Dawid. The situation with regard to the existence of calibrated forecasters is much more serious than either Dawid or Oakes makes it out to be. There are enough noncalibrable sequences that noncalibrability is as much the normal state of affairs as calibrability. According to Oakes's theorem, the cardinality of the set of noncalibrable sequences is that of the continuum. There are at least two ways to interpret the preceding theorem and Oakes's result. The first is to claim that it is impossible to guarantee the existence of "empirically valid" (i.e., calibrated) sequential forecasts. A more reasonable interpretation is simply that these results are further evidence that (long-run) calibration is not a useful measure of the goodness of forecasts.
- Published
- 1985
- Full Text
- View/download PDF
21. Rejoinder.
- Author
- Genest, Christian and Schervish, Mark J.
- Subjects
- PARADOX, STATISTICS
- Abstract
The article presents the authors' reply concerning the resolution of Vidyadhar Godambe's paradox in the assessment of statistical data.
- Published
- 1985
- Full Text
- View/download PDF
22. Multivariate Statistical Simulation (Book).
- Author
- Schervish, Mark J.
- Subjects
- STATISTICS, NONFICTION
- Abstract
Reviews the book "Multivariate Statistical Simulation," by Mark E. Johnson.
- Published
- 1989
- Full Text
- View/download PDF