1,079 results
Search Results
2. Modified XLindley distribution: Properties, estimation, and applications.
- Author
-
Gemeay, Ahmed M., Beghriche, Abdelfateh, Sapkota, Laxmi Prasad, Zeghdoudi, Halim, Makumi, Nicholas, Bakr, M. E., and Balogun, Oluwafemi Samson
- Subjects
PROBABILITY theory ,STOCHASTIC orders ,INFERENTIAL statistics ,DATA visualization ,STATISTICS ,HAZARD function (Statistics) ,GOODNESS-of-fit tests - Abstract
This article aims to introduce the inverse new XLindley distribution, a further extension of the new XLindley distribution. The article explores various properties of the proposed model, such as the quantile function, stochastic orders, entropies, fuzzy reliability, moments, and stress–strength estimation. The paper also compares different methods of estimating the parameters of the proposed model and evaluates their performance using a simulation study. Moreover, the paper demonstrates the usefulness of the proposed model by applying it to two real datasets. The article shows that the proposed model fits the data better than seven existing models based on model selection criteria, goodness-of-fit test statistics, and graphical visualizations. The paper concludes that the new model can be a valuable tool for modeling and analyzing hazard functions and survival data in various fields, contributing to probability theory and statistical inference. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
3. The Transform-Transformer Approach: Unveiling the Odd Transmuted Rayleigh-X Family of Distributions.
- Author
-
Abdullahi, J., Gulumbe, S. U., Usman, U., and Garba, A. I.
- Subjects
DISTRIBUTION (Probability theory) ,RAYLEIGH model ,CHARACTERISTIC functions ,PROBABILITY theory ,STATISTICS - Abstract
The paper presents a novel class (family) of statistical distributions, termed Odd Transmuted Rayleigh-X (OTR-X), created through a transform-transformer (T-X) approach. The CDF and PDF of the OTR-X family were derived. The statistical literature reviewed indicates that almost all generalized distributions (in which one or more parameters are added) perform well and represent data better than their counterparts with fewer parameters; this motivated the development of a new family capable of producing new distributions. The paper also presents explicit mathematical formulas for several characteristics of the OTR-X family, such as the ordinary moments, moment generating function, quantile function, and reliability function. The technique of maximum likelihood is used to estimate the parameters of the OTR-X family. A new sub-model, the Odd Transmuted Rayleigh Inversed Exponential Distribution (OTRIED), was generated from the OTR-X class, and its performance was compared with the Transmuted Inversed Exponential Distribution (TIED), the Exponential Inversed Exponential Distribution (EIED), and the Inversed Exponential Distribution on two different datasets. The results show that the proposed distribution outperformed its competitors on both real-world datasets. Furthermore, the proposed distribution can be applied to any skewed dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
4. Bayesian Methods for Information Borrowing in Basket Trials: An Overview.
- Author
-
Zhou, Tianjian and Ji, Yuan
- Subjects
TUMOR treatment ,STATISTICS ,EXPERIMENTAL design ,CLINICAL trials ,CLINICAL medicine research ,DATA analysis ,STATISTICAL models ,DRUG development ,PROBABILITY theory - Abstract
Simple Summary: This paper provides a review of statistical methods for tumor-agnostic clinical trials. In particular, the review focuses on basket trials and provides methodological insights into various Bayesian approaches. The key concept of borrowing information through Bayesian hierarchical models is emphasized, and some novel trial designs are introduced. The review is expected to provide oncology and biostatistics researchers with more exposure to powerful Bayesian methods for the design and analysis of tumor-agnostic clinical trials. Basket trials allow simultaneous evaluation of a single therapy across multiple cancer types or subtypes of the same cancer. Since the same treatment is tested across all baskets, it may be desirable to borrow information across them to improve the statistical precision and power in estimating and detecting the treatment effects in different baskets. We review recent developments in Bayesian methods for the design and analysis of basket trials, focusing on the mechanism of information borrowing. We explain the common components of these methods, such as a prior model for the treatment effects that embodies an assumption of exchangeability. We also discuss the distinct features of these methods that lead to different degrees of borrowing. Through simulation studies, we demonstrate the impact of information borrowing on the operating characteristics of these methods and discuss its broader implications for drug development. Examples of basket trials are presented in both phase I and phase II settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Sojourn times of Gaussian and related random fields.
- Author
-
Dȩbicki, Krzysztof, Hashorva, Enkelejd, Peng Liu, and Michna, Zbigniew
- Subjects
PROBABILITY theory ,GAUSSIAN distribution ,CHI-squared test ,QUEUING theory ,STATISTICS - Abstract
This paper is concerned with the asymptotic analysis of sojourn times of random fields with continuous sample paths. Under a very general framework we show that there is an interesting relationship between the tail asymptotics of sojourn times and that of the supremum. Moreover, we establish the uniform double-sum method to derive the tail asymptotics of sojourn times. In the literature, based on the pioneering research of S. Berman, sojourn times have been utilised to derive the tail asymptotics of the supremum of Gaussian processes. In this paper we show that the opposite direction is even more fruitful: knowing the asymptotics of the supremum of random processes and fields (in particular Gaussian), it is possible to establish the asymptotics of their sojourn times. We illustrate our findings by considering i) two-dimensional Gaussian random fields, ii) chi-processes generated by stationary Gaussian processes, and iii) stationary Gaussian queueing processes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
6. Linear Algorithms for Robust and Scalable Nonparametric Multiclass Probability Estimation.
- Author
-
LIYUN ZENG and HAO HELEN ZHANG
- Subjects
NONPARAMETRIC estimation ,CONDITIONAL probability ,SUPPORT vector machines ,POLYNOMIAL time algorithms ,STATISTICS ,PROBABILITY theory ,COMPUTATIONAL complexity ,POLYNOMIAL chaos - Abstract
Multiclass probability estimation is the problem of estimating conditional probabilities of a data point belonging to a class given its covariate information. It has broad applications in statistical analysis and data science. Recently a class of weighted Support Vector Machines (wSVMs) has been developed to estimate class probabilities through ensemble learning for K-class problems (Wu et al., 2010; Wang et al., 2019), where K is the number of classes. The estimators are robust and achieve high accuracy for probability estimation, but their learning is implemented through pairwise coupling, which demands polynomial time in K. In this paper, we propose two new learning schemes, the baseline learning and the One-vs-All (OVA) learning, to further improve wSVMs in terms of computational efficiency and estimation accuracy. In particular, the baseline learning has optimal computational complexity in the sense that it is linear in K. Though not the most efficient in computation, the OVA is found to have the best estimation accuracy among all the procedures under comparison. The resulting estimators are distribution-free and shown to be consistent. We further conduct extensive numerical experiments to demonstrate their finite sample performance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
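The computational contrast in the abstract above (pairwise coupling polynomial in K versus One-vs-All linear in K) can be sketched with toy aggregation rules. These are illustrative couplings assumed for demonstration only, not the authors' wSVM estimators; the function names are hypothetical.

```python
# Pairwise coupling requires K*(K-1)/2 trained classifiers; OVA requires K.
# Both sketches below only aggregate already-estimated probabilities.

def couple_pairwise(r):
    """Average pairwise probabilities r[i][j] = P(class i wins the i-vs-j contest)."""
    K = len(r)
    p = [sum(r[i][j] for j in range(K) if j != i) / (K - 1) for i in range(K)]
    s = sum(p)
    return [x / s for x in p]  # normalize to a probability vector

def couple_ova(scores):
    """Normalize K one-vs-all probability scores into class probabilities."""
    s = sum(scores)
    return [x / s for x in scores]

# 3-class toy example: class 0 wins every pairwise contest.
r = [[0.0, 0.8, 0.9],
     [0.2, 0.0, 0.6],
     [0.1, 0.4, 0.0]]
p = couple_pairwise(r)
print(p.index(max(p)))  # class 0 has the highest coupled probability -> 0
```

Either scheme yields a distribution-free probability vector; the abstract's point is that the OVA route needs only K underlying classifiers while retaining good accuracy.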
7. What is the proper way to apply the multiple comparison test?
- Author
-
Sangseok Lee and Dong Kyu Lee
- Subjects
HYPOTHESIS ,ERROR rates ,PROBABILITY theory ,STATISTICAL power analysis ,BONFERRONI correction ,ANALYSIS of variance - Abstract
Multiple comparison tests (MCTs) are performed repeatedly on the means of experimental conditions. When the overall null hypothesis is rejected, MCTs are performed to determine which experimental conditions have a statistically significant mean difference, or whether there is a specific pattern among the group means. A problem occurs because the error rate increases as multiple hypothesis tests are performed simultaneously. Consequently, in an MCT, it is necessary to control the error rate at an appropriate level. In this paper, we discuss how to test multiple hypotheses simultaneously while limiting the type I error rate, which is driven up by alpha inflation. To choose an appropriate test, we must maintain the balance between statistical power and the type I error rate. If the test is too conservative, a type I error is unlikely to occur; however, the test may then have insufficient power, resulting in an increased probability of a type II error. Most researchers hope to find the best way of adjusting the type I error rate to discriminate real differences in observed data without wasting too much statistical power. It is expected that this paper will help researchers understand the differences between MCTs and apply them appropriately. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
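The alpha-inflation control discussed in the abstract above can be illustrated with two standard corrections: Bonferroni (the most conservative) and Holm's step-down refinement. A minimal sketch, assuming the raw p-values are already in hand; the helper names are illustrative, not from the paper.

```python
def bonferroni(pvals, alpha=0.05):
    """Reject H_i only if p_i <= alpha / m (controls FWER, conservative)."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def holm(pvals, alpha=0.05):
    """Step-down: compare the k-th smallest p-value to alpha / (m - k + 1)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # stop at the first non-rejection
    return reject

pvals = [0.001, 0.012, 0.020, 0.400]
print(bonferroni(pvals))  # [True, True, False, False]: threshold 0.05/4 = 0.0125
print(holm(pvals))        # [True, True, True, False]: Holm recovers the third test
```

The example shows the power/type-I trade-off in the abstract: Holm controls the same familywise error rate as Bonferroni yet rejects strictly more hypotheses here.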
8. Stochastically Transitive Models for Pairwise Comparisons: Statistical and Computational Issues.
- Author
-
Shah, Nihar B., Balakrishnan, Sivaraman, Guntuboyina, Adityanand, and Wainwright, Martin J.
- Subjects
PAIRED comparisons (Mathematics) ,THRESHOLDING algorithms ,STOCHASTIC analysis ,PARAMETER estimation ,PROBABILITY theory - Abstract
There are various parametric models for analyzing pairwise comparison data, including the Bradley–Terry–Luce (BTL) and Thurstone models, but their reliance on strong parametric assumptions is limiting. In this paper, we study a flexible model for pairwise comparisons, under which the probabilities of outcomes are required only to satisfy a natural form of stochastic transitivity. This class includes the parametric models, with BTL and Thurstone as special cases, but is considerably more general. We provide various examples of models in this broader stochastically transitive class for which classical parametric models provide poor fits. Despite this greater flexibility, we show that the matrix of probabilities can be estimated at the same rate as in standard parametric models, up to logarithmic terms. On the other hand, unlike in the BTL and Thurstone models, computing the minimax-optimal estimator in the stochastically transitive model is non-trivial, and we explore various computationally tractable alternatives. We show that a simple singular value thresholding algorithm is statistically consistent but does not achieve the minimax rate. We then propose and study algorithms that achieve the minimax rate over interesting sub-classes of the full stochastically transitive class. We complement our theoretical results with thorough numerical simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
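The transitivity condition in the abstract above can be checked directly on a matrix of pairwise win probabilities. A minimal sketch of the strong form of the condition (if i beats j and j beats k with probability at least 1/2, then i must beat k at least as decisively); the paper's estimators are far more involved than this brute-force checker.

```python
def is_sst(M):
    """Check strong stochastic transitivity of win-probability matrix M,
    where M[i][j] = P(item i beats item j)."""
    n = len(M)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if M[i][j] >= 0.5 and M[j][k] >= 0.5:
                    if M[i][k] < max(M[i][j], M[j][k]):
                        return False
    return True

# Toy 3-item matrix consistent with the ordering 0 > 1 > 2.
M = [[0.5, 0.9, 0.9],
     [0.1, 0.5, 0.6],
     [0.1, 0.4, 0.5]]
print(is_sst(M))  # True
```

Note that no parametric form (BTL, Thurstone) is imposed here; any matrix passing this check lies in the broader class the paper studies.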
9. Inverse Probability Weighting to Estimate Exposure Effects on the Burden of Recurrent Outcomes in the Presence of Competing Events.
- Author
-
Gaber, Charles E, Edwards, Jessie K, Lund, Jennifer L, Peery, Anne F, Richardson, David B, and Kinlaw, Alan C
- Subjects
NONPARAMETRIC statistics ,STATISTICS ,SCIENTIFIC observation ,SIMULATION methods in education ,TREATMENT effectiveness ,CONCEPTUAL structures ,ATTRIBUTION (Social psychology) ,DATA analysis ,PROBABILITY theory - Abstract
Recurrent events—outcomes that an individual can experience repeatedly over the course of follow-up—are common in epidemiologic and health services research. Studies involving recurrent events often focus on time to first occurrence or on event rates, which assume constant hazards over time. In this paper, we contextualize recurrent event parameters of interest using counterfactual theory in a causal inference framework and describe an approach for estimating a target parameter referred to as the mean cumulative count. This approach leverages inverse probability weights to control measured confounding with an existing (and underutilized) nonparametric estimator of recurrent event burden first proposed by Dong et al. in 2015. We use simulations to demonstrate the unbiased estimation of the mean cumulative count using the weighted Dong-Yasui estimator in a variety of scenarios. The weighted Dong-Yasui estimator for the mean cumulative count allows researchers to use observational data to flexibly estimate and contrast the expected number of cumulative events experienced per individual by a given time point under different exposure regimens. We provide code to ease application of this method. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
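The mean cumulative count targeted in the abstract above reduces, when every subject has complete follow-up, to a simple average. This toy sketch deliberately omits the inverse probability weights and the censoring/competing-event handling of the weighted Dong-Yasui estimator; it only illustrates the target parameter.

```python
def mean_cumulative_count(event_times_per_subject, t):
    """Average number of recurrent events experienced per subject by time t,
    assuming complete (uncensored) follow-up for every subject."""
    n = len(event_times_per_subject)
    total = sum(sum(1 for e in times if e <= t)
                for times in event_times_per_subject)
    return total / n

# Four subjects; each inner list holds that subject's recurrent event times.
cohort = [[1.0, 3.5], [2.0], [], [0.5, 2.5, 4.0]]
print(mean_cumulative_count(cohort, 3.0))  # 4 events by t=3 across 4 subjects -> 1.0
```

Contrasting this quantity under different exposure regimens is exactly the kind of comparison the weighted estimator enables with observational data.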
10. Comparing the inversion statistic for distribution-biased and distribution-shifted permutations with the geometric and the GEM distributions.
- Author
-
Pinsky, Ross G.
- Subjects
STATISTICS ,GEOMETRIC distribution ,PERMUTATIONS ,PROBABILITY theory ,STOCHASTIC convergence - Abstract
Given a probability distribution p := {p_k}_{k=1}^∞ on the positive integers, there are two natural ways to construct a random permutation in S_n or a random permutation of ℕ from IID samples from p. One is called the p-biased construction and the other the p-shifted construction. In the first part of the paper we consider the case that the distribution p is the geometric distribution with parameter 1 - q ∈ (0, 1). In this case, the p-shifted random permutation has the Mallows distribution with parameter q. Let P_n^{b;Geo(1-q)} and P_n^{s;Geo(1-q)} denote the biased and the shifted distributions on S_n. The expected number of inversions of a permutation under P_n^{s;Geo(1-q)} is greater than under P_n^{b;Geo(1-q)}, and under either of these distributions, a permutation tends to have many fewer inversions than it would have under the uniform distribution. For fixed n, both P_n^{b;Geo(1-q)} and P_n^{s;Geo(1-q)} converge weakly as q → 1 to the uniform distribution on S_n. We compare the biased and the shifted distributions by studying the inversion statistic under ... and ... for various rates of convergence of q_n to 1. In the second part of the paper we consider p-biased and p-shifted permutations for the case that the distribution p is itself random and distributed as a GEM(θ) distribution. In particular, in both the GEM(θ)-biased and the GEM(θ)-shifted cases, the expected number of inversions behaves asymptotically as it does under the Geo(1 - q)-shifted distribution with θ = q/(1 - q). This allows one to consider the GEM(θ)-shifted case as the random counterpart of the Geo(q)-shifted case. We also consider another p-biased distribution with random p for which the expected number of inversions behaves asymptotically as it does under the Geo(1 - q)-biased case with θ and q as above, and with θ → ∞ and q → 1. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
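The inversion statistic studied in the abstract above can be computed directly; a brute-force O(n^2) sketch (for reference, the expected count under the uniform distribution on S_n is n(n-1)/4):

```python
def inversions(perm):
    """Count pairs (i, j) with i < j but perm[i] > perm[j]."""
    n = len(perm)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if perm[i] > perm[j])

print(inversions([3, 1, 2]))  # pairs (3,1) and (3,2) -> 2
```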
11. Reply to the letter of Katayev and Fleming.
- Author
-
Haeckel, Rainer and Wosniok, Werner
- Subjects
REFERENCE values ,STATISTICS ,CONFLICT (Psychology) ,DATA analysis ,RESEARCH bias ,ALGORITHMS ,PROBABILITY theory - Published
- 2022
- Full Text
- View/download PDF
12. ARDS after Pneumonectomy: How to Prevent It? Development of a Nomogram to Predict the Risk of ARDS after Pneumonectomy for Lung Cancer.
- Author
-
Mazzella, Antonio, Mohamed, Shehab, Maisonneuve, Patrick, Borri, Alessandro, Casiraghi, Monica, Bertolaccini, Luca, Petrella, Francesco, Lo Iacono, Giorgio, and Spaggiari, Lorenzo
- Subjects
C-reactive protein ,STATISTICS ,CONFIDENCE intervals ,MULTIVARIATE analysis ,LUNG tumors ,RETROSPECTIVE studies ,ACQUISITION of data ,SURGICAL complications ,ADULT respiratory distress syndrome ,RISK assessment ,MEDICAL records ,DESCRIPTIVE statistics ,LOGISTIC regression analysis ,ODDS ratio ,PNEUMONECTOMY ,PROBABILITY theory ,DISEASE risk factors - Abstract
Simple Summary: In the modern era, characterized by parenchymal-sparing procedures, pneumonectomy in some cases remains the only therapeutic approach to achieving oncological radicality. Among the most feared complications are respiratory failure and ARDS. The cause of ARDS after pneumonectomy is still unclear, and its risk factors remain a subject of debate. In this paper, we evaluate the main risk factors for ARDS in a large cohort of patients and classify them into four classes of increasing risk in order to quantify their postoperative risk of ARDS and facilitate their global management. (1) Background: The cause of ARDS after pneumonectomy is still unclear, and the study of risk factors is a subject of debate. (2) Methods: We reviewed a large panel of pre-, peri- and postoperative data of 211 patients who underwent pneumonectomy during the period 2014–2021. Univariable and multivariable logistic regression was used to quantify the association between preoperative parameters and the risk of developing ARDS, expressed as odds ratios with their respective 95% confidence intervals. A backward stepwise selection approach was used to limit the number of variables in the final multivariable model to significant independent predictors of ARDS. A nomogram was constructed based on the results of the final multivariable model, making it possible to estimate the probability of developing ARDS. Statistical significance was defined by a two-tailed p-value < 0.05. (3) Results: Out of 211 patients, 28 (13.3%) developed ARDS. In the univariate analysis, increasing age, Charlson Comorbidity Index and ASA scores, DLCO < 75% predicted, preoperative C-reactive protein (CRP), lung perfusion and duration of surgery were associated with ARDS; a significant increase in ARDS was also observed with decreasing VO2max level. Multivariable analysis confirmed the role of ASA score, DLCO < 75% predicted, preoperative C-reactive protein and lung perfusion. Using the nomogram, we classified patients into four classes with rates of ARDS ranging from 2.0% to 34.0%. (4) Conclusions: Classification into four classes of increasing risk allows correct preoperative stratification of these patients in order to quantify the postoperative risk of ARDS and facilitate their global management. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
13. Pattern of Radiotherapy Treatment in Low-Risk, Intermediate-Risk, and High-Risk Prostate Cancer Patients: Analysis of National Cancer Database.
- Author
-
Agrawal, Rishabh, Dey, Asoke, Datta, Sujay, Nassar, Ana, Grubb, William, Traughber, Bryan, Biswas, Tithi, Ove, Roger, and Podder, Tarun
- Subjects
MORTALITY risk factors ,MORTALITY prevention ,STATISTICS ,MULTIVARIATE analysis ,CANCER patients ,RISK assessment ,SEX distribution ,KAPLAN-Meier estimator ,PROTON therapy ,PROSTATE tumors ,PROBABILITY theory - Abstract
Simple Summary: Prostate cancer (PCa) is the most common cancer and the second leading cause of cancer-related mortality among males in the US. Definitive radiation therapy (RT) plays an important role in curative-intent treatment for localized PCa and can be delivered with several different techniques, depending on the availability of resources and patient-specific criteria. With an analysis of the extensive National Cancer Database, this paper investigates trends in utilization, survival probability, and factors associated with overall survival of six common RT modalities utilized for the treatment of PCa patients—stratified by the three risk groups. Background: In this study, the utilization rates and survival outcomes of different radiotherapy techniques are compared in prostate cancer (PCa) patients stratified by risk group. Methods: We analyzed an extensive data set of N0, M0, non-surgical PCa patients diagnosed between 2004 and 2015 from the National Cancer Database (NCDB). Patients were grouped into six categories based on RT modality: an intensity-modulated radiation therapy (IMRT) group with brachytherapy (BT) boost, IMRT with/without IMRT boost, proton therapy, stereotactic body radiation therapy (SBRT), low-dose-rate brachytherapy (BT LDR), and high-dose-rate brachytherapy (BT HDR). Patients were also stratified by the National Comprehensive Cancer Network (NCCN) guidelines: low-risk (clinical stage T1–T2a, Gleason Score (GS) ≤ 6, and Prostate-Specific Antigen (PSA) < 10), intermediate-risk (clinical stage T2b or T2c, GS of 7, or PSA of 10–20), and high-risk (clinical stage T3–T4, or GS of 8–10, or PSA > 20). Overall survival (OS) probability was determined using a Kaplan–Meier estimator. Univariate and multivariate analyses were performed by risk group for the six treatment modalities. Results: The most utilized treatment modality for all PCa patients was IMRT (53.1%). 
Over the years, a steady increase in SBRT utilization was observed, whereas BT HDR usage declined. IMRT-treated patient groups exhibited relatively lower survival probability in all risk categories. A slightly better survival probability was observed for the proton therapy group. Hormonal therapy was used for a large number of patients in all risk groups. Conclusion: This study revealed that IMRT was the most common treatment modality for PCa patients. Brachytherapy, SBRT, and IMRT+BT exhibited similar survival rates, whereas proton therapy showed slightly better overall survival across the three risk groups. However, analysis of the demographics indicates that these differences are at least in part due to selection bias. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
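The Kaplan-Meier overall-survival estimate used in the abstract above follows the standard product-limit formula: S(t) is the product over event times t_i ≤ t of (1 - d_i/n_i), where d_i deaths occur among n_i subjects still at risk. A minimal sketch on toy data (not from the study):

```python
def kaplan_meier(times, events):
    """Product-limit survival curve. times: follow-up times;
    events: 1 = death observed, 0 = censored. Returns [(t, S(t)), ...]."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)    # deaths at time t
        n_t = sum(1 for tt, _ in data if tt == t)  # subjects leaving at t
        if d:
            surv *= 1.0 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= n_t
        i += n_t
    return curve

# Five subjects: deaths at t=1, 2, 3; censoring at t=2 and t=4.
print(kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0]))
```

Comparing such curves across the six RT modality groups (by log-rank tests or regression) is the basis of the survival comparisons the abstract reports.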
14. SHORT AND PROLONGED FASTING PRIOR TO THE PERFORMANCE OF TRACHEOSTOMIES IN INTENSIVE THERAPY: A RETROSPECTIVE STUDY.
- Author
-
Gonzalo Duran, Lucas, Emilia Beilman, María, Natali Quiroga, Araceli, Cruz, Magdalena, Vanesa Millan, Alejandra, Johanna Ojeda, Micaela, Ciccioli, Fabiana, Montenegro Fernandez, Micaela Giselle, Monrroy Miro, Wendy Estefany, Trinidad Malisia, Valentina, Antonio Grassi, Nicolas, Zelaya De Leon, Nazareno Iñaki, Ezequiel Espinoza, Franco, Otamendi, Marina, Zorzano Osinalde, Paula, and Petasny, Marcos
- Subjects
MORTALITY risk factors ,RISK factors of pneumonia ,PNEUMONIA diagnosis ,PREPROCEDURAL fasting ,TRACHEOTOMY ,RISK assessment ,PNEUMONIA ,T-test (Statistics) ,PATIENTS ,FISHER exact test ,PROBABILITY theory ,QUESTIONNAIRES ,LOGISTIC regression analysis ,COMPUTED tomography ,TREATMENT duration ,RETROSPECTIVE studies ,MANN Whitney U Test ,DESCRIPTIVE statistics ,CHI-squared test ,RESPIRATORY diseases ,LONGITUDINAL method ,ODDS ratio ,ARTIFICIAL respiration ,INTENSIVE care units ,STATISTICS ,MEDICAL records ,ACQUISITION of data ,LENGTH of stay in hospitals ,CONFIDENCE intervals ,DATA analysis software ,COMPARATIVE studies ,MECHANICAL ventilators - Abstract
- Published
- 2024
- Full Text
- View/download PDF
15. Maximum Entropy Technique and Regularization Functional for Determining the Pharmacokinetic Parameters in DCE-MRI.
- Author
-
Amini Farsani, Zahra and Schmid, Volker J
- Subjects
ARTERIAL physiology ,LEFT heart ventricle ,STATISTICS ,PHYSICS ,NOISE ,CONTRAST media ,MAGNETIC resonance imaging ,MACHINE learning ,UNCERTAINTY ,DATABASE management ,COMPUTED tomography ,DATA analysis ,ALGORITHMS ,PROBABILITY theory ,BREAST tumors - Abstract
This paper aims to solve the arterial input function (AIF) determination problem in dynamic contrast-enhanced MRI (DCE-MRI), an important linear ill-posed inverse problem, using the maximum entropy technique (MET) and regularization functionals. Estimating the pharmacokinetic parameters from DCE-MRI investigations requires precise information about the AIF: the concentration of the contrast agent in the left ventricular blood pool measured over time. The main idea is therefore to show how to find a unique solution of a linear system of equations, generally of the form y = Ax + b, an ill-conditioned system that arises after discretization of the integral equations appearing in various tomographic image restoration and reconstruction problems. A new algorithm is described that estimates an appropriate probability distribution function for the AIF according to the MET and regularization functionals for the contrast agent concentration, applying a Bayesian estimation approach to estimate two different pharmacokinetic parameters. Moreover, in analyses of simulated and real breast tumor datasets with respect to pharmacokinetic factors, using Bayesian inference (which infers the uncertainties of the computed solutions and incorporates specific knowledge of the noise and errors) combined with the regularization functional of the maximum entropy problem improved the convergence behavior and led to more consistent morphological and functional results. Finally, in comparison to the exponential distribution based on MET and Newton's method, or the Weibull distribution via MET and teaching–learning-based optimization (MET/TLBO) from previous studies, the family of Gamma and Erlang distributions estimated by the new algorithm provides more appropriate and robust AIFs. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
16. A comprehensive modelling approach to estimate the transmissibility of coronavirus and its variants from infected subjects in indoor environments.
- Author
-
Anand, S., Krishan, Jayant, Sreekanth, B., and Mayya, Y. S.
- Subjects
SARS-CoV-2 ,VIRAL load ,EPISTEMIC uncertainty ,STATISTICS ,AEROSOLS ,PROBABILITY theory ,ESTIMATES - Abstract
A central issue in assessing the airborne risk of COVID-19 infections in indoor spaces pertains to linking the viral load in infected subjects to the lung deposition probability in exposed individuals through comprehensive aerosol dynamics modelling. In this paper, we achieve this by combining aerosol processes (evaporation, dispersion, settling, lung deposition) with a novel double Poisson model to estimate the probability that at least one carrier particle containing at least one virion will be deposited in the lungs and infect a susceptible individual. Multiple emission scenarios are considered. Unlike the hitherto used single Poisson models, the double Poisson model accounts for fluctuations in the number of carrier particles deposited in the lung in addition to the fluctuations in the virion number per carrier particle. The model demonstrates that the risk of infection for 10-min indoor exposure increases from 1 to 50% as the viral load in the droplets ejected from the infected subject increases from 2 × 10^8 to 2 × 10^10 RNA copies/mL. Being based on well-established aerosol science and statistical principles, the present approach puts airborne risk assessment methodology on a sound formalistic footing, thereby reducing avoidable epistemic uncertainties in estimating relative transmissibilities of different coronavirus variants quantified by different viral loads. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
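One way to read the double Poisson idea in the abstract above: the number of deposited carrier particles is Poisson, each particle independently carries a Poisson number of virions, and Poisson thinning then gives a closed form for the probability that at least one deposited particle contains at least one virion. The formula below is a simplified reading of that structure, not necessarily the paper's exact model; `n_dep` (mean deposited particles) and `mu` (mean virions per particle) are assumed parameter names.

```python
import math

def infection_probability(n_dep, mu):
    """Double Poisson sketch: particles deposited ~ Poisson(n_dep), virions
    per particle ~ Poisson(mu). A particle is 'infectious' with probability
    1 - exp(-mu); thinning gives P(at least one infectious particle)."""
    return 1.0 - math.exp(-n_dep * (1.0 - math.exp(-mu)))

# A single Poisson model on total expected virions would give
# 1 - exp(-n_dep * mu), ignoring particle-level fluctuations.
print(infection_probability(5.0, 0.01))  # low virion load per particle
```

At low `mu` the two models nearly coincide, with the double Poisson value slightly smaller; the gap is exactly the particle-level fluctuation effect the abstract emphasizes.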
17. Comparative analysis of the Cancer Council of Victoria and the online Commonwealth Scientific and Industrial Research Organisation FFQ.
- Author
-
Gardener, Samantha L., Rainey-Smith, Stephanie R., Macaulay, S. Lance, Taddei, Kevin, Rembach, Alan, Maruff, Paul, Ellis, Kathryn A., Masters, Colin L., Rowe, Christopher C., Ames, David, Keogh, Jennifer B., and Martins, Ralph N.
- Subjects
CHI-squared test ,CONFIDENCE intervals ,STATISTICAL correlation ,DIET ,DIETARY fiber ,CARBOHYDRATE content of food ,FAT content of food ,SODIUM content of food ,INGESTION ,NUTRITIONAL assessment ,PROBABILITY theory ,PSYCHOMETRICS ,QUESTIONNAIRES ,REGRESSION analysis ,RESEARCH funding ,STATISTICS ,SURVEYS ,T-test (Statistics) ,TUMORS ,UNSATURATED fatty acids ,RESEARCH methodology evaluation ,FOOD diaries ,DATA analysis software ,DESCRIPTIVE statistics ,NUTRIENT density - Abstract
FFQ are commonly used to examine the association between diet and disease. They are the most practical method for usual dietary data collection as they are relatively inexpensive and easy to administer. In Australia, the Cancer Council of Victoria FFQ (CCVFFQ) version 2 and the online Commonwealth Scientific and Industrial Research Organisation FFQ (CSIROFFQ) are used. The aim of our study was to establish the level of agreement between nutrient intakes captured using the online CSIROFFQ and the paper-based CCVFFQ. The CCVFFQ and the online CSIROFFQ were completed by 136 healthy participants. FFQ responses were analysed to give g per d intake of a range of nutrients. Agreement between twenty-six nutrient intakes common to both FFQ was measured by a variety of methods. Nutrient intake levels that were significantly correlated between the two FFQ were carbohydrates, total fat, Na and MUFA. When assessing ranking of nutrients into quintiles, on average, 56 % of the participants (for all nutrients) were classified into the same or adjacent quintiles in both FFQ, with the highest percentage agreement for sugar. On average, 21 % of participants were grossly misclassified by three or four quintiles, with the highest percentage misclassification for fibre and Fe. Quintile agreement was similar to that reported by other studies, and we concluded that both FFQ are suitable tools for dividing participants’ nutrient intake levels into high- and low-consumption groups. Use of either FFQ was not appropriate for obtaining accurate estimates of absolute nutrient intakes. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
18. Track-to-Track Fusion Using Inside Information From Local IMM Estimators.
- Author
-
VISINA, RADU, BAR-SHALOM, YAAKOV, WILLETT, PETER, and DEY, DIPAK K.
- Subjects
STATISTICS ,MULTISENSOR data fusion ,PROBABILITY theory ,DATA fusion (Statistics) ,DATABASES - Abstract
A novel approach to the track-to-track fusion (T2TF) of state estimates from interacting multiple-model (IMM) estimators using inside information [mode-conditioned estimates (MCEs) and mode probabilities] is described in this paper. Fusion is performed on-demand, i.e., without conditioning on past track data. The local trackers run IMM estimators to track a maneuvering target with switching process noise, and they transmit MCEs and mode probabilities to a fusion center. The fused state posterior probability density is a Gaussian mixture, where the parameters of the required likelihood functions can be computed recursively. Mode probabilities are fused by transforming them to log-ratios and using them as statistical information in the likelihood function of the mode. This results in consistent data fusion based on known target and local tracker (IMM) parameters. Simulations show that this method outperforms the fusion of the local IMM estimators' Gaussian-approximated outputs both in terms of error during target maneuvers and in terms of the consistency of the mean-squared error (MSE). It is a generalization of Gaussian T2TF with cross-covariance, and its performance is close to that of centralized measurement fusion (CMF): by accounting for the error and log-ratio cross-covariances, the fused covariance consistency matches the ideal consistency of CMF without requiring memory of past fused tracks. The method is also shown to be more accurate, informative, consistent in MSE, and of lower computational and communication cost than Chernoff fusion, a recently published method for Gaussian mixture fusion. [ABSTRACT FROM AUTHOR]
- Published
- 2020
19. Dynamic Risk Evaluation and Early Warning of Crest Cracking for High Earth-Rockfill Dams through Bayesian Parameter Updating.
- Author
-
Wang, Yongfei, Li, Junru, Wu, Zhenyu, Chen, Jiankang, Yin, Chuan, and Bian, Kang
- Subjects
RISK assessment ,DAMS ,TIME perception ,DAM failures ,STATISTICS ,PROBABILITY theory - Abstract
Crest cracking is one of the most common damage types for high earth-rockfill dams. The cracking risk of a dam crest is closely related to the duration of the abnormal deformation state. In this paper, a methodology for dynamic risk evaluation and early warning of crest cracking for high earth-rockfill dams is proposed, consisting mainly of: (a) discrimination of the abnormal deformation state related to crest cracking, implemented by comparing the crest settlement inclination with a threshold value; (b) computation of the crest cracking probability and estimation of the cracking time, where the exponential distribution is adopted to represent the probability distribution of the duration TAS of the abnormal state before crest cracking, so that the crest cracking probability in a given time can be computed by integration with respect to TAS and, inversely, the cracking time corresponding to a given probability can be estimated; (c) determination of the probability values used to issue early warnings of crest cracking, which are suggested to be selected by statistical analysis of the probabilities calculated at the observed cracking times; and (d) Bayesian estimation and updating of the probability distribution of the parameter λ in the PDF of TAS, according to observed durations of the abnormal state before crest cracking. The methodology is illustrated and verified by a case study of an actual earth-rockfill dam, for which crest cracking and recracking events were observed during periods of high reservoir level. According to the observed values of TAS, the probability distribution for λ is progressively updated, and the dispersion of the distributions of λ gradually decreases. The crest cracking probability increases with the duration of the abnormal state, and the width of the confidence interval of the estimated cracking probability progressively contracts as the distribution for λ is updated. Finally, early warning of crest cracking for the dam is investigated by estimating the lower limit of the cracking time. It is shown that early warning of crest cracking can be issued at least 20 days ahead of the occurrence of a crest cracking event. The idea of using the duration of the abnormal state of crest settlement to evaluate the crest cracking risk of the earth-rockfill dam in this paper may be applicable to other dams. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
20. A non-parametric significance test to compare corpora.
- Author
-
Koplenig, Alexander
- Subjects
STATISTICAL hypothesis testing ,NULL hypothesis ,CORPORA ,PROBABILITY theory ,STATISTICS - Abstract
Classical null hypothesis significance tests are not appropriate in corpus linguistics, because the randomness assumption underlying these testing procedures is not fulfilled. Nevertheless, there are numerous scenarios where it would be beneficial to have some kind of test in order to judge the relevance of a result (e.g. a difference between two corpora) by answering the question whether the attribute of interest is pronounced enough to warrant the conclusion that it is substantial and not due to chance. In this paper, I outline such a test. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
21. AdequacyModel: An R package for probability distributions and general purpose optimization.
- Author
-
Marinho, Pedro Rafael D., Silva, Rodrigo B., Bourguignon, Marcelo, Cordeiro, Gauss M., and Nadarajah, Saralees
- Subjects
MINIMUM variance estimation ,MATHEMATICAL statistics ,PARTICLE swarm optimization ,COMPUTATIONAL statistics ,MATHEMATICAL optimization ,MONTE Carlo method ,PROBABILITY theory - Abstract
Several lifetime distributions have played an important role in fitting survival data. However, for some of these models, the computation of maximum likelihood estimators is quite difficult due to the presence of flat regions in the search space, among other factors. Several well-known derivative-based optimization tools are unsuitable for obtaining such estimates. To circumvent this problem, we introduce the AdequacyModel computational library version 2.0.0 for the R statistical environment with two major contributions: a general optimization technique based on the Particle Swarm Optimization (PSO) method (with a minor modification of the original algorithm) and a set of statistical measures for assessment of the adequacy of the fitted model. This library is very useful for researchers in probability and statistics and has been cited in various papers in these areas. It serves as the basis for the Newdistns library (version 2.1), published in an impact journal in the area of computational statistics. It is also the basis of the Wrapped library (version 2.0), and a third package making use of the AdequacyModel library is also available. In addition, the proposed library has proved to be very useful for maximizing log-likelihood functions with complex search regions. The library provides greater control of the optimization process by introducing a stop criterion based on a minimum number of iterations and the variance of a given proportion of optimal values. We emphasize that the new library can be used not only in statistics but also in physics and mathematics, as shown in several examples throughout the paper. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
22. Multilevel analyses of on-demand medication data, with an application to the treatment of Female Sexual Interest/Arousal Disorder.
- Author
-
Kessels, Rob, Bloemers, Jos, Tuiten, Adriaan, and van der Heijden, Peter G. M.
- Subjects
MULTILEVEL models ,PHARMACOLOGY ,LUST ,DRUG efficacy ,DRUGS - Abstract
Data from clinical trials investigating on-demand medication often consist of an intentionally varying number of measurements per patient. These measurements are often observations of discrete events of when the medication was taken, including for example data on symptom severity. In addition to the varying number of observations between patients, the data have another important feature: they are characterized by a hierarchical structure in which the events are nested within patients. Traditionally, the observed events of patients are aggregated into means and subsequently analyzed using, for example, a repeated measures ANOVA. This procedure has drawbacks. One drawback is that these patient means have different standard errors, first, because the variance of the underlying events differs between patients and second, because the number of events per patient differs. In this paper, we argue that such data should be analyzed by applying a multilevel analysis using the individual observed events as separate nested observations. Such a multilevel approach handles this drawback and it also enables the examination of varying drug effects across patients by estimating random effects. We show how multilevel analyses can be applied to on-demand medication data from a clinical trial investigating the efficacy of a drug for women with low sexual desire. We also explore linear and quadratic time effects that can only be performed when the individual events are considered as separate observations and we discuss several important statistical topics relevant for multilevel modeling. Taken together, the use of a multilevel approach considering events as nested observations in these types of data is advocated as it is more valid and provides more information than other (traditional) methods. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
23. Measures of Extropy for Concomitants of Generalized Order Statistics in Morgenstern Family.
- Author
-
Almaspoor, Zahra, Jafari, Ali Akbar, and Tahmasebi, Saeid
- Subjects
STATISTICS ,MATHEMATICAL variables ,PROBABILITY theory ,MATHEMATICAL functions ,BIVARIATE analysis - Abstract
In this paper, a measure of extropy is obtained for concomitants of m-generalized order statistics in the Morgenstern family. The cumulative residual extropy (CREX) and negative cumulative extropy (NCEX) are presented for the rth concomitant of m-generalized order statistics. In addition, the problem of estimating the CREX and NCEX is studied utilizing the empirical method in concomitants of m-generalized order statistics. Some applications of these results are given for the concomitants of order statistics and record values. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
24. The Ideological and Political Education Model of College Students Based on Probability Theory and Statistics
- Author
-
Xuan Wangwei
- Subjects
ideological and political education ,education portrait of college students ,probability theory ,statistics ,hierarchical clustering ,word cloud ,91c20 ,Mathematics ,QA1-939 - Abstract
This article uses the theory of probability and statistics to evaluate the thinking dynamics of college students in order to understand their psychological state. First, the paper uses a web crawler to crawl and analyze official micro articles. Using the method of probability statistics and the K-means clustering method, we can understand the psychological state of college students in real time. The results of this experiment show that current hot topics can be obtained within a certain period by using the statistical method of vocabulary display and clustering. The purpose of this paper is to propose corresponding countermeasures and approaches for the ideological and political work of college graduates. This model has a positive effect on cultivating college students’ values and ways of thinking.
- Published
- 2023
- Full Text
- View/download PDF
25. What’s left after the hype? An empirical approach comparing the distributional properties of traditional and virtual currency exchange rates.
- Author
-
Hempfing, Alexander
- Subjects
FOREIGN exchange ,ELECTRONIC money ,FOREIGN exchange rates ,LAPLACE distribution ,MONEY supply - Abstract
This paper provides an empirical analysis of the distributional properties and statistical regularities of virtual, intra-virtual and traditional currency exchange rates. To perform the analysis, the most relevant virtual, intra-virtual and foreign currency exchange rates between October 2015 and December 2018 are examined. The analysis shows that, in spite of their differing mode of formation, daily log-returns of all currency types share tent-shaped empirical densities, one of the characteristics of a Laplace distribution at semi-log scale. This peculiar property has also been examined thoroughly in other fields of economic literature. Moreover, the empirical results show that virtual and traditional currencies hold the same functional form, even after the 2018 hype. However, in spite of these similarities virtual and intra-virtual currencies display fatter tails and steeper towering peaks than regular foreign currencies which underscores the rather speculative nature of this asset class. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
26. Drug sensitivity prediction with high-dimensional mixture regression.
- Author
-
Li, Qianyun, Shi, Runmin, and Liang, Faming
- Subjects
DRUG analysis ,FEATURE selection ,REGRESSION analysis ,RANDOM forest algorithms ,PREDICTION models - Abstract
This paper proposes a mixture regression model-based method for drug sensitivity prediction. The proposed method explicitly addresses two fundamental issues in drug sensitivity prediction, namely, population heterogeneity and feature selection pertaining to each of the subpopulations. The mixture regression model is estimated using the imputation-conditional consistency algorithm, and the resulting estimator is consistent. This paper also proposes an average-BIC criterion for determining the number of components for the mixture regression model. The proposed method is applied to the CCLE dataset, and the numerical results indicate that the proposed method can make a drastic improvement over the existing ones, such as random forest, support vector regression, and regularized linear regression, in both drug sensitivity prediction and feature selection. The p-values for the comparisons in drug sensitivity prediction can reach the order O(10^-8) or lower for the drugs with heterogeneous populations. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
27. Non-significant p-values? Strategies to understand and better determine the importance of effects and interactions in logistic regression.
- Author
-
Vakhitova, Zarina I. and Alston-Knox, Clair L.
- Subjects
BAYESIAN analysis ,LOGISTIC regression analysis ,SIMULATION methods & models ,CRIME victims ,COMPUTER crimes - Abstract
In the context of generalized linear models (GLMs), interactions are automatically induced on the natural scale of the data. The conventional approach to measuring effects in GLMs based on significance testing (e.g. the Wald test or using deviance to assess model fit) is not always appropriate. The objective of this paper is to demonstrate the limitations of these conventional approaches and to explore alternative strategies for determining the importance of effects. The paper compares four approaches to determining the importance of effects in the GLM using 1) the Wald statistic, 2) change in deviance (model fitting criteria), 3) Bayesian GLM using vaguely informative priors and 4) Bayesian Model Averaging analysis. The main points in this paper are illustrated using an example study, which examines the risk factors for cyber abuse victimization, and are further examined using a simulation study. Analysis of our example dataset shows that, in terms of a logistic GLM, the conventional methods using the Wald test and the change in deviance can produce results that are difficult to interpret; Bayesian analysis of GLM is a suitable alternative, which is enhanced with prior knowledge about the direction of the effects; and Bayesian Model Averaging (BMA) is especially suited for new areas of research, particularly in the absence of theory. We recommend that social scientists consider including BMA in their standard toolbox for analysis of GLMs. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
28. A multi-event combination maintenance model based on event correlation.
- Author
-
Guo, Chunhui, Lyu, Chuan, Chen, Jiayu, and Zhou, Dong
- Subjects
MAINTENANCE costs ,PARTICLE swarm optimization ,DECISION making ,IMAGE segmentation ,SYSTEM downtime - Abstract
Due to the complexity of large production systems, maintenance events are diverse, simultaneous and dynamic. Appropriate maintenance management of complex large production systems can guarantee high availability and save maintenance costs. However, current maintenance decision-making methods mainly focus on the maintenance events of single-components and series connection multi-components; little research pays attention to the combination maintenance of different maintenance events. Therefore, this paper proposes a multi-event combination maintenance model based on event correlation. First, the maintenance downtime and cost of three types of maintenance events under different maintenance beginning times and degrees are analysed. Then, shared maintenance downtime and cost models are established by maintenance event correlations. In addition, a multi-event combination maintenance model is constructed to achieve the goal of the highest availability and the lowest cost rate in both the decision-making cycle and the remaining life. Moreover, a particle swarm optimization algorithm based on interval segmentation for model solving is designed. Finally, a numerical example is presented to illustrate the model. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
29. An empirical analysis of post-work grocery shopping activity duration using modified accelerated failure time model to differentiate time-dependent and time-independent covariates.
- Author
-
Wang, Ke, Ye, Xin, and Ma, Jie
- Subjects
GROCERY shopping ,CONTINUOUS time models ,PROPORTIONAL hazards models ,DATA extraction ,POPULATION biology - Abstract
In this paper, the accelerated failure time (AFT) model is modified to analyze post-work grocery shopping activity duration. Much previous shopping duration analysis was conducted using the proportional hazard (PH) modeling approach. When the proportionality assumption was violated, the traditional accelerated failure time (TAFT) model was usually selected as an alternative modeling approach. However, a TAFT model only has covariates with non-proportional and time-dependent effects on the hazard over time, while a PH model only accommodates covariates with proportional and time-independent effects. Neither considers the possibility that some covariates may have proportional and time-independent effects while others have non-proportional and time-dependent effects on the hazard value in one model. To address this issue, the paper generalizes the TAFT model and develops a modified accelerated failure time (MAFT) model to accommodate both time-dependent and time-independent covariates for activity duration analysis. Checking the proportionality assumption indicates that it is not valid in the post-work grocery shopping activity data extracted from the 2017 National Household Travel Survey (NHTS) conducted by the U.S. Department of Transportation (USDOT). Both TAFT and MAFT models are developed for comparison and analysis. The empirical and statistical results show that there do exist two different types of covariates affecting shopping activity duration: covariates with only proportional and time-independent effects (i.e. working duration, commute travel time) and those with non-proportional and time-dependent effects. The MAFT model can capture the subtleties in various types of covariate effects and help better understand how those covariates affect activity duration over time. This paper also shows the importance of developing a flexible duration model with both time-dependent and time-independent covariates for accurately evaluating travel demand management (TDM) policies, like flexible work hours. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
30. Automatic detection and classification of manufacturing defects in metal boxes using deep neural networks.
- Author
-
Essid, Oumayma, Laga, Hamid, and Samir, Chafik
- Subjects
ARTIFICIAL neural networks ,CLASSIFICATION algorithms ,COMPUTER vision ,COMPUTATIONAL complexity ,SUPPORT vector machines - Abstract
This paper develops a new machine vision framework for efficient detection and classification of manufacturing defects in metal boxes. Previous techniques, which are based on either visual inspection or on hand-crafted features, are both inaccurate and time consuming. In this paper, we show that by using autoencoder deep neural network (DNN) architecture, we are able to not only classify manufacturing defects, but also localize them with high accuracy. Compared to traditional techniques, DNNs are able to learn, in a supervised manner, the visual features that achieve the best performance. Our experiments on a database of real images demonstrate that our approach overcomes the state-of-the-art while remaining computationally competitive. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
31. Approximate parameter inference in systems biology using gradient matching: a comparative evaluation.
- Author
-
Macdonald, Benn, Niu, Mu, Rogers, Simon, Filippone, Maurizio, and Husmeier, Dirk
- Subjects
SYSTEMS biology ,ORDINARY differential equations ,GAUSSIAN processes ,HILBERT space ,BIOLOGICAL systems ,COMPARATIVE studies ,RESEARCH methodology ,MEDICAL cooperation ,PROBABILITY theory ,RESEARCH ,STATISTICS ,BIOINFORMATICS ,EVALUATION research - Abstract
Background: A challenging problem in current systems biology is that of parameter inference in biological pathways expressed as coupled ordinary differential equations (ODEs). Conventional methods that repeatedly numerically solve the ODEs have large associated computational costs. Aimed at reducing this cost, new concepts using gradient matching have been proposed, which bypass the need for numerical integration. This paper presents a recently established adaptive gradient matching approach, using Gaussian processes (GPs), combined with a parallel tempering scheme, and conducts a comparative evaluation with current state-of-the-art methods used for parameter inference in ODEs. Among these contemporary methods is a technique based on reproducing kernel Hilbert spaces (RKHS). This has previously shown promising results for parameter estimation, but under lax experimental settings. We look at a range of scenarios to test the robustness of this method. We also change the approach of inferring the penalty parameter from AIC to cross validation to improve the stability of the method. Methods: Methodology for the recently proposed adaptive gradient matching method using GPs, upon which we build our new method, is provided. Details of a competing method using RKHS are also described here. Results: We conduct a comparative analysis for the methods described in this paper, using two benchmark ODE systems. The analyses are repeated under different experimental settings, to observe the sensitivity of the techniques. Conclusions: Our study reveals that for known noise variance, our proposed method based on GPs and parallel tempering achieves overall the best performance. When the noise variance is unknown, the RKHS method proves to be more robust. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
32. Semiparametric likelihood inference for left-truncated and right-censored data.
- Author
-
Huang, Chiung-Yu, Ning, Jing, and Qin, Jing
- Subjects
CENSORING (Statistics) ,NONPARAMETRIC estimation ,BIOMETRY ,DECONVOLUTION (Mathematics) ,EXPECTATION-maximization algorithms ,GOODNESS-of-fit tests ,AGING ,EPIDEMIOLOGICAL research ,EXPERIMENTAL design ,PROBABILITY theory ,RESEARCH funding ,STATISTICS ,DATA analysis ,RESEARCH bias ,STATISTICAL models - Abstract
This paper proposes a new estimation procedure for the survival time distribution with left-truncated and right-censored data, where the distribution of the truncation time is known up to a finite-dimensional parameter vector. The paper expands on Vardi's multiplicative censoring model (Vardi, 1989. Multiplicative censoring, renewal processes, deconvolution and decreasing density: non-parametric estimation. Biometrika 76, 751–761), establishes the connection between the likelihood under a generalized multiplicative censoring model and that for left-truncated and right-censored survival time data, and derives an Expectation-Maximization algorithm for model estimation. A formal test for checking the truncation time distribution is constructed based on the semiparametric likelihood ratio test statistic. In particular, testing the stationarity assumption that the underlying truncation time is uniformly distributed is performed by embedding the null uniform truncation time distribution in a smooth alternative (Neyman, 1937. Smooth test for goodness of fit. Skandinavisk Aktuarietidskrift 20, 150–199). Asymptotic properties of the proposed estimator are established. Simulations are performed to evaluate the finite-sample performance of the proposed methods. The methods and theories are illustrated by analyzing the Canadian Study of Health and Aging and the Channing House data, where the stationarity assumption with respect to disease incidence holds for the former but not the latter. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
33. Probabilistic vs deterministic forecasts – interpreting skill statistics for the benefit of users.
- Author
-
Landman, Willem A., Tadross, Mark, Archer, Emma, and Johnston, Peter
- Subjects
- *
STATISTICS , *FORECASTING , *SEASONS , *PROBABILITY theory - Abstract
Owing to probabilistic uncertainties associated with seasonal forecasts, especially over areas such as southern Africa where forecast skill is limited, non-climatologists and users of such forecasts frequently prefer them to be presented or distributed in terms of the likelihood (expressed as a probability) of certain categories occurring or thresholds being exceeded. Probabilistic forecast verification is needed to verify such forecasts. Whilst the resulting verification statistics can provide clear insights into forecast attributes, they are often difficult to understand, which might hinder forecast uptake and use. This problem can be addressed by issuing forecasts with some understandable evidence of skill, with the purpose of reflecting how similar forecasts may have performed in the past. In this paper, we present a range of different probabilistic forecast verification scores, and determine if these statistics can be readily compared to more commonly known and understood ‘ordinary’ correlations between forecasts and their associated observations – assuming that ordinary correlations are more intuitively understood and informative to seasonal forecast users. Of the range of scores considered, the relative operating characteristics (ROC) was found to be the most intrinsically similar to correlation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
34. Plithogenic Probability & Statistics are generalizations of MultiVariate Probability & Statistics.
- Author
-
Smarandache, Florentin
- Subjects
RANDOM variables ,DISTRIBUTION (Probability theory) ,STATISTICS ,GENERALIZATION ,PROBABILITY theory - Abstract
In this paper we exemplify the types of Plithogenic Probability and, respectively, Plithogenic Statistics. Several applications are given. The Plithogenic Probability of an event to occur is composed from the chances that the event occurs with respect to all random variables (parameters) that determine it. Each such variable is described by a Probability Distribution (Density) Function, which may be a classical, (T,I,F)-neutrosophic, I-neutrosophic, (T,F)-intuitionistic fuzzy, (T,N,F)-picture fuzzy, (T,N,F)-spherical fuzzy, or (other fuzzy extension) distribution function. The Plithogenic Probability is a generalization of the classical MultiVariate Probability. The analysis of the events described by the plithogenic probability is the Plithogenic Statistics. [ABSTRACT FROM AUTHOR]
- Published
- 2021
35. Comment on: "Confidence Intervals for Nonparametric Empirical Bayes Analysis" by Ignatiadis and Wager.
- Author
-
Imbens, Guido
- Subjects
EMPIRICAL Bayes methods ,VALUE-added assessment (Education) ,CLINICAL drug trials ,STATISTICS ,PROBABILITY theory - Abstract
I want to congratulate Nikolaos Ignatiadis and Stefan Wager on a very stimulating paper on a timely and important topic. In these cases, getting accurate confidence intervals is of first-order importance, and the methods Ignatiadis and Wager develop are likely to be useful. [Extracted from the article]
- Published
- 2022
- Full Text
- View/download PDF
36. The Trojan Lifetime Champions Health Survey: Development, Validity, and Reliability.
- Author
-
Sikka, Robby, Fetzer, Gary, Hunkele, Thomas, Sugarman, Eric, and Boyd, Joel
- Subjects
QUALITY of life ,EXERCISE ,CHI-squared test ,COLLEGE athletes ,CONFIDENCE intervals ,EXERCISE physiology ,EXPERIMENTAL design ,HEALTH attitudes ,HEALTH behavior ,HEALTH status indicators ,RESEARCH methodology ,PROBABILITY theory ,QUESTIONNAIRES ,RESEARCH evaluation ,SELF-evaluation ,STATISTICAL hypothesis testing ,STATISTICS ,T-test (Statistics) ,STATISTICAL reliability ,CONTINUING education units ,INTER-observer reliability ,MULTITRAIT multimethod techniques ,ELITE athletes ,RESEARCH methodology evaluation ,DATA analysis software ,DESCRIPTIVE statistics ,ONE-way analysis of variance - Abstract
Context: Self-report questionnaires are an important method of evaluating lifespan health, exercise, and health-related quality of life (HRQL) outcomes among elite, competitive athletes. Few instruments, however, have undergone formal characterization of their psychometric properties within this population. Objective: To evaluate the validity and reliability of a novel health and exercise questionnaire, the Trojan Lifetime Champions (TLC) Health Survey. Design: Descriptive laboratory study. Setting: A large National Collegiate Athletic Association Division I university. Patients or Other Participants: A total of 63 university alumni (age range, 24 to 84 years), including former varsity collegiate athletes and a control group of nonathletes. Intervention(s): Participants completed the TLC Health Survey twice at a mean interval of 23 days, with randomization to the paper or electronic version of the instrument. Main Outcome Measure(s): Content validity, feasibility of administration, test-retest reliability, parallel-form reliability between paper and electronic forms, and estimates of systematic and typical error versus differences of clinical interest were assessed across a broad range of health, exercise, and HRQL measures. Results: Correlation coefficients, including intraclass correlation coefficients (ICCs) for continuous variables and K agreement statistics for ordinal variables, for test-retest reliability averaged 0.86, 0.90, 0.80, and 0.74 for HRQL, lifetime health, recent health, and exercise variables, respectively. Correlation coefficients, again ICCs and K, for parallel-form reliability (ie, equivalence) between paper and electronic versions averaged 0.90, 0.85, 0.85, and 0.81 for HRQL, lifetime health, recent health, and exercise variables, respectively. Typical measurement error was less than the a priori thresholds of clinical interest, and we found minimal evidence of systematic test-retest error.
We found strong evidence of content validity, convergent construct validity with the Short-Form 12 Version 2 HRQL instrument, and feasibility of administration in an elite, competitive athletic population. Conclusions: These data suggest that the TLC Health Survey is a valid and reliable instrument for assessing lifetime and recent health, exercise, and HRQL, among elite competitive athletes. Generalizability of the instrument may be enhanced by additional, larger-scale studies in diverse populations. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
37. Updating Autonomous Underwater Vehicle Risk Based on the Effectiveness of Failure Prevention and Correction.
- Author
-
Brito, Mario P. and Griffiths, Gwyn
- Subjects
AUTONOMOUS underwater vehicles ,RISK assessment ,UNDERWATER equipment ,EMERGENCY management ,PROBABILITY theory - Abstract
Autonomous underwater vehicles (AUVs) have proven to be feasible platforms for marine observations. Risk and reliability studies on the performance of these vehicles by different groups show a significant difference in reliability, with the observation that the outcomes depend on whether the vehicles are operated by developers or nondevelopers. This paper shows that this difference in reliability is due to the failure prevention and correction procedures--risk mitigation--put in place by developers. However, no formalization has been developed for updating the risk profile based on the expected effectiveness of the failure prevention and correction process. A generic Bayesian approach for updating the risk profile is presented, based on the probability of failure prevention and correction and the number of subsequent deployments on which the failure does not occur. The approach, which applies whether the risk profile is captured in a parametric or nonparametric survival model, is applied to a real case study of the International Submarine Engineering Ltd. (ISE) Explorer AUV. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
38. Adaptive probability hypothesis density filter for multi-target tracking with unknown measurement noise statistics.
- Author
-
Xu, Weijun
- Subjects
RANDOM noise theory ,PROBABILITY theory ,STATISTICS ,GAUSSIAN distribution ,DYNAMICAL systems - Abstract
Under the Gaussian noise assumption, the probability hypothesis density (PHD) filter represents a promising tool for tracking a group of moving targets with a time-varying number. However, inaccurate prior statistics of the random noise will degrade the performance of the PHD filter in many practical applications. This paper presents an adaptive Gaussian mixture PHD (AGM-PHD) filter for the multi-target tracking (MTT) problem in the scenario where both the mean and covariance of measurement noise sequences are unknown. The conventional PHD filters are extended to jointly estimate both the multi-target state and the aforementioned measurement noise statistics. In particular, the Normal-inverse-Wishart and Gaussian distributions are first integrated to represent the joint posterior intensity by transforming the measurement model into a new formulation. Then, the updating rule for the hyperparameters of the model is derived in closed form based on variational Bayesian (VB) approximation and Bayesian conjugate prior heuristics. Finally, the dynamic system state and the noise statistics are updated sequentially in an iterative manner. Simulation results with both constant-velocity and constant-turn models demonstrate that the AGM-PHD filter achieves comparable performance to the ideal PHD filter with true measurement noise statistics. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
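The conjugate-update machinery this abstract mentions can be illustrated, in drastically simplified 1-D form, by an inverse-gamma prior on an unknown measurement-noise variance updated from residuals. This is our own toy analogue (the prior hyperparameters, sample size, and true variance below are invented), not the AGM-PHD filter itself:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: zero-mean Gaussian measurement residuals whose variance is
# unknown to the estimator. An inverse-gamma prior IG(alpha, beta) on the
# variance is conjugate to this likelihood, so the posterior is available
# in closed form, much as the VB updates in the abstract are.
true_var = 4.0
residuals = rng.normal(0.0, np.sqrt(true_var), size=500)

alpha, beta = 2.0, 2.0                          # prior hyperparameters
alpha_post = alpha + len(residuals) / 2         # IG posterior shape
beta_post = beta + 0.5 * np.sum(residuals**2)   # IG posterior scale

var_post_mean = beta_post / (alpha_post - 1)    # posterior mean of variance
print(round(var_post_mean, 2))
```

With 500 residuals the posterior mean lands close to the true variance of 4; the AGM-PHD filter performs an analogous (but multivariate, Normal-inverse-Wishart) update at every filtering step.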
39. ASYMPTOTIC THEORY FOR ESTIMATING THE SINGULAR VECTORS AND VALUES OF A PARTIALLY-OBSERVED LOW RANK MATRIX WITH NOISE.
- Author
-
Juhee Cho, Donggyu Kim, and Karl Rohe
- Subjects
LOW-rank matrices ,ASYMPTOTIC theory in mathematical statistics ,PROBABILITY theory ,QUANTITATIVE research ,ESTIMATION theory ,STATISTICS - Abstract
Matrix completion algorithms recover a low rank matrix from a small fraction of the entries, each entry contaminated with additive errors. In practice, the singular vectors and singular values of the low rank matrix play a pivotal role for statistical analyses and inferences. This paper proposes estimators of these quantities and studies their asymptotic behavior. Under the setting where the dimensions of the matrix increase to infinity and the probability of observing each entry is identical, Theorem 1 gives the rate of convergence for the estimated singular vectors; Theorem 3 gives a multivariate central limit theorem for the estimated singular values. Even though the estimators use only a partially observed matrix, they achieve the same rates of convergence as the fully observed case. These estimators combine to form a consistent estimator of the full low rank matrix that is computed with a non-iterative algorithm. In the cases studied in this paper, this estimator achieves the minimax lower bound in Koltchinskii, Lounici and Tsybakov (2011). The numerical experiments corroborate our theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
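The core idea behind estimating spectra from a partially observed matrix can be sketched briefly: zero-fill the unobserved entries and rescale the observed ones by the inverse observation probability, so that the filled matrix is unbiased for the full one and its leading singular values estimate those of the target. This is a hedged toy illustration of that general device, not the paper's exact estimator; the dimensions, rank, and probabilities below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a rank-2 matrix with known singular values, observed
# entry-wise with identical probability p, plus small additive noise.
n, d, rank, p = 400, 400, 2, 0.5
U, _ = np.linalg.qr(rng.normal(size=(n, rank)))
V, _ = np.linalg.qr(rng.normal(size=(d, rank)))
M = U @ np.diag([100.0, 60.0]) @ V.T        # true singular values 100, 60
noisy = M + 0.01 * rng.normal(size=(n, d))
mask = rng.random((n, d)) < p               # which entries we get to see

# Zero-fill and rescale by 1/p: E[M_filled] = M entry-wise, so the top
# singular values of M_filled estimate those of M even though roughly
# half the entries were never observed.
M_filled = np.where(mask, noisy / p, 0.0)
s_est = np.linalg.svd(M_filled, compute_uv=False)[:rank]
print(s_est.round(1))
```

The estimates are noisy at this scale; the asymptotic regime studied in the paper (dimensions growing to infinity) is what drives the convergence rates in its theorems.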
40. Understanding the urban--rural disparity in HIV and poverty nexus: the case of Kenya.
- Author
-
Magadi, Monica A.
- Subjects
CHI-squared test ,FACTOR analysis ,HIV infections ,META-analysis ,METROPOLITAN areas ,MULTIVARIATE analysis ,POPULATION geography ,POVERTY ,PROBABILITY theory ,RURAL conditions ,SOCIOLOGY ,STATISTICS ,SURVEYS ,LOGISTIC regression analysis ,SECONDARY analysis ,SOCIOECONOMIC factors ,HEALTH & social status ,INTRACLASS correlation - Abstract
Background The relationship between HIV and poverty is complex and recent studies reveal an urban--rural divide that is not well understood. This paper examines the urban--rural disparity in the relationship between poverty and HIV infection in Kenya, with particular reference to possible explanations relating to social cohesion/capital and other moderating factors. Methods Multilevel logistic regression models are applied to nationally-representative samples of 13 094 men and women of reproductive age from recent Kenya Demographic and Health Surveys. Results The results confirm a disproportionate higher risk of HIV infection among the urban poor, despite a general negative association between poverty and HIV infection among rural residents. Estimates of intra-community correlations suggest lower social cohesion in urban than rural communities. This, combined with marked socio-economic inequalities in urban areas is likely to result in the urban poor being particularly vulnerable. The results further reveal interesting cultural variations and trends. In particular, recent declines in HIV prevalence among urban residents in Kenya have been predominantly confined to those of higher socio-economic status. Conclusion With current rapid urbanization patterns and increasing urban poverty, these trends have important implications for the future of the HIV epidemic in Kenya and similar settings across the sub-Saharan Africa region. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
41. The use of technology in teaching and learning the topic of probability and statistics in school mathematics.
- Author
-
Shyti, Bederiana, Valera, Dhurata, and Starja, Diana
- Subjects
- *
MATHEMATICS , *PROBABILITY theory , *PROJECT method in teaching , *STATISTICS , *SECONDARY school students - Abstract
The use of technology in teaching and learning probability and statistics in the educational system of our country, Albania, can be encouraged at different class levels. Lee and Hollebrands (2011) characterized the teaching of probability and statistics with the help of technology through the integration of Statistical Knowledge, Technological Statistical Knowledge, and Technological Pedagogical Statistical Knowledge. We have extended this framework with Pedagogical Knowledge as a further important factor. This paper presents a case study on mastering the concepts of probability and statistics with students of a lower secondary school in our country, in order to understand the role of technology in teaching and learning. The study was carried out concretely during lessons, specifically through project-based learning, with the students themselves demonstrating the results of the project. Our students recognized and analyzed the data, then collected and represented it to produce graphical presentations. Thus, we focus on the potential advantages of using technology in teaching and learning probability and statistics in school mathematics at different levels of the educational process in our country and beyond. [ABSTRACT FROM AUTHOR]
- Published
- 2023
42. Tobacco quitline performance: Comparing the impacts of early cessation and proactive re-engagement on callers' smoking status at follow-up at 12 months.
- Author
-
Cassidy, Daniel G., Xin-Qun Wang, Mallawaarachchi, Indika, Wiseman, Kara P., Ebbert, Jon O., Blue Star, John A., Aycock, Chase A., Estevez Burns, Rosemary, Jones, John R., Krunnfusz, Andrea E., Halbert, Jennifer P., Roy, Natalie M., Ellis, Jordan M., Williams, Juinell B., Klesges, Robert C., and Talcott, Gerald W.
- Subjects
STATISTICS ,SMOKING cessation ,PATIENT participation ,CONFIDENCE intervals ,HEALTH status indicators ,HELPLINES ,NICOTINE replacement therapy ,DESCRIPTIVE statistics ,RESEARCH funding ,SMOKING ,LOGISTIC regression analysis ,ODDS ratio ,DATA analysis ,HEALTH promotion ,PROBABILITY theory - Abstract
INTRODUCTION While tobacco Quitlines are effective in the promotion of smoking cessation, the majority of callers who wish to quit still fail to do so. The aim of this study was to determine if 12-month tobacco Quitline smoking cessation rates could be improved with re-engagement of callers whose first Quitline treatment failed to establish abstinence. METHODS In an adaptive trial, 614 adult smokers, who were active duty, retired, and family of military personnel with TRICARE insurance who called a tobacco Quitline, received a previously evaluated and efficacious four-session tobacco cessation intervention with nicotine replacement therapy (NRT). At the scheduled follow-up at 3 months, callers who had not yet achieved abstinence were offered the opportunity to re-engage. This resulted in three caller groups: 1) those who were abstinent, 2) those who were still smoking but willing to re-engage with an additional Quitline treatment; and 3) individuals who were still smoking but declined re-engagement. A propensity score-adjusted logistic regression model was generated to compare past-7-day point prevalence abstinence at 12 months post Quitline consultation. RESULTS Using a propensity score adjusted logistic regression model, comparison of the three groups resulted in higher odds of past-7-day point prevalence abstinence at follow-up at 12 months for those who were abstinent at 3 months compared to those who re-engaged (OR=9.6; 95% CI: 5.2-17.8; Bonferroni adjusted p<0.0001), and relative to those who declined re-engagement (OR=13.4; 95% CI: 6.8-26.3; Bonferroni adjusted p<0.0001). There was no statistically significant difference in smoking abstinence between smokers at 3 months who re-engaged and those who declined re-engagement (OR=1.39; 95% CI: 0.68-2.85). 
CONCLUSIONS Tobacco Quitlines seeking to select a single initiative by which to maximize abstinence at follow-up at 12 months may benefit from diverting additional resources from the re-engagement of callers whose initial quit attempt failed, toward changes which increase callers' probability of success within the first 3 months of treatment. TRIAL REGISTRATION This study is registered at clinicaltrials.gov (NCT02201810). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
43. The accuracy and consistency of mastery for each content domain using the Rasch and deterministic inputs, noisy "and" gate diagnostic classification models: a simulation study and a real-world analysis using data from the Korean Medical Licensing Examination.
- Author
-
Dong Gi Seo and Jae Kum Kim
- Subjects
STATISTICS ,RESEARCH evaluation ,PROFESSIONAL licenses ,PSYCHOMETRICS ,COMPARATIVE studies ,DATA analysis ,STATISTICAL models ,STATISTICAL correlation ,PROBABILITY theory - Abstract
Purpose: Diagnostic classification models (DCMs) were developed to identify the mastery or non-mastery of the attributes required for solving test items, but their application has been limited to very low-level attributes, and the accuracy and consistency of high-level attributes using DCMs have rarely been reported compared with classical test theory (CTT) and item response theory models. This paper compared the accuracy of high-level attribute mastery between deterministic inputs, noisy "and" gate (DINA) and Rasch models, along with sub-scores based on CTT. Methods: First, a simulation study explored the effects of attribute length (number of items per attribute) and the correlations among attributes with respect to the accuracy of mastery. Second, a real-data study examined model and item fit and investigated the consistency of mastery for each attribute among the 3 models using the 2017 Korean Medical Licensing Examination with 360 items. Results: Accuracy of mastery increased with a higher number of items measuring each attribute across all conditions. The DINA model was more accurate than the CTT and Rasch models for attributes with high correlations (>0.5) and few items. In the real-data analysis, the DINA and Rasch models generally showed better item fits and appropriate model fit. The consistency of mastery between the Rasch and DINA models ranged from 0.541 to 0.633 and the correlations of person attribute scores between the Rasch and DINA models ranged from 0.579 to 0.786. Conclusion: Although all 3 models provide a mastery decision for each examinee, the individual mastery profile using the DINA model provides more accurate decisions for attributes with high correlations than the CTT and Rasch models. The DINA model can also be directly applied to tests with complex structures, unlike the CTT and Rasch models, and it provides different diagnostic information from the CTT and Rasch models. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
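The DINA item response rule discussed in this abstract is simple to state: an examinee answers an item correctly with probability 1 − s (one minus the slip parameter) when they have mastered every attribute the item requires, and with the guessing probability g otherwise. A minimal sketch of that rule (the attribute names and parameter values are purely illustrative, not from the Korean Medical Licensing Examination data):

```python
def p_correct(mastered, required, slip, guess):
    """DINA response probability: 1 - slip if every required attribute
    is mastered (the latent indicator eta = 1), else the guess rate."""
    eta = required <= mastered          # set inclusion: all attrs mastered?
    return 1 - slip if eta else guess

# Illustrative item requiring two attributes, with slip 0.1 and guess 0.2.
item = {"required": {"ratios", "proportions"}, "slip": 0.1, "guess": 0.2}

master_both = p_correct({"ratios", "proportions"}, item["required"],
                        item["slip"], item["guess"])
master_one = p_correct({"ratios"}, item["required"],
                       item["slip"], item["guess"])
print(master_both, master_one)
```

The "noisy and" name comes from exactly this conjunctive structure: missing any one required attribute drops the success probability all the way to the guessing rate.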
44. Inpatient falls prevention: state-wide survey to identify variability in Western Australian hospitals.
- Author
-
FERGUSON, CHANTAL and MASON, LOUISE
- Subjects
- *
AGE distribution , *COGNITION , *ACCIDENTAL falls , *LONGITUDINAL method , *MEDICAL records , *MULTIVARIATE analysis , *PROBABILITY theory , *RISK assessment , *SEX distribution , *STATISTICS , *SURVEYS , *EVIDENCE-based medicine , *LOGISTIC regression analysis , *PROFESSIONAL practice , *DISEASE prevalence , *DESCRIPTIVE statistics , *ACQUISITION of data methodology , *ODDS ratio - Abstract
Objective: A point prevalence survey was conducted across Western Australia to monitor adherence to evidence-based practices to prevent falls in hospitals. Study design and methods: A state-wide point prevalence survey of patients and their medical records was conducted across 20 hospitals, over 17 days during May 2014. The survey determined rates of: provision of verbal information to patients; completion of a falls risk screening tool; and age-based cognitive testing. Univariate and multivariate logistic regression was utilised to determine key risks and opportunities to improve. Results: Information was collected from 2,720 patients. The provision of verbal information to prevent falls, as recalled by patients, was 60% (hospital range 35--88%). This was significantly higher for patients with a stay of six or more days or involved in rehabilitation care. Perinatal women were three times less likely to be provided with verbal falls prevention information. A falls risk screening tool was completed for 82% of patients (range 28--98%). Perinatal women, and both adult and paediatric patients compared to older adults, were significantly less likely to have a complete falls risk screening tool. Thirty-seven percent of patients within the recommended age ranges had cognitive testing (range 0--87%). Short-term patients and those not involved in rehabilitation were significantly less likely to have been tested. Discussion: The survey identified differences in patient care and supporting processes across all hospitals. The results have highlighted areas for improvement. Conclusion: There were wide variations across all the hospitals in the provision of falls information, completion of falls risk screening tools and cognitive testing. At significant risk of missing out on falls prevention strategies were short-stay patients and perinatal women. 
Five hospitals had significantly low rates of cognitive testing, indicating a hospital-wide issue rather than one confined to specific patient cohorts. It is therefore vital to ensure that falls prevention strategies are conducted, to reduce preventable inpatient falls in all care settings. Implications for research, policy and practice: * This was the first state-wide point prevalence study in WA and it has informed the need for further research into the implications of falls risk in inpatients. * It was found that falls risk assessment was not conducted for each patient who met the screening criteria. A review of the criteria, and of the practicability of carrying out the assessment, may need to be further investigated to determine if the practice should be refined. What is already known about the topic? * Falls in hospitals are a frequent and largely preventable health concern. * Falls that occur in hospitals are associated with an increased length of stay and use of health resources. What this paper adds: * This paper offers a comprehensive insight into the variation in hospital falls prevention strategies from a state-wide perspective. It also identifies perinatal women as a high-risk group who are missing out on falls prevention strategies despite having the potential to fall. * It also gives health services an insight that not all at-risk patients are being screened, and that those screened are not screened early in their inpatient stay, which can be a risk to both patients and staff. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
45. Distributions of pattern statistics in sparse Markov models.
- Author
-
Martin, Donald E. K.
- Subjects
TIME series analysis ,CONDITIONAL probability ,STATISTICS ,PROBABILITY theory ,MARKOV processes - Abstract
Markov models provide a good approximation to probabilities associated with many categorical time series, and thus they are applied extensively. However, a major drawback associated with them is that the number of model parameters grows exponentially in the order of the model, and thus only very low-order models are considered in applications. Another drawback is lack of flexibility, in that Markov models give relatively few choices for the number of model parameters. Sparse Markov models are Markov models with conditioning histories that are grouped into classes such that the conditional probability distribution for members of each class is constant. The model gives a better handling of the trade-off between bias associated with having too few model parameters and variance from having too many. In this paper, methodology for efficient computation of pattern distributions through Markov chains with minimal state spaces is extended to the sparse Markov framework. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
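The defining feature of a sparse Markov model, per the abstract above, is that conditioning histories are grouped into classes sharing one conditional distribution, cutting the parameter count below the full higher-order model. A minimal sketch of that structure (the grouping and probabilities are our own illustration, not the paper's examples):

```python
import numpy as np

rng = np.random.default_rng(1)

# Order-2 binary chain whose four histories are grouped into two classes;
# histories in the same class share a single conditional law. Here the
# class depends only on whether the last two symbols agree, so the model
# needs 2 parameters instead of the full order-2 model's 4.
class_of = {(0, 0): "A", (1, 1): "A", (0, 1): "B", (1, 0): "B"}
p_next_one = {"A": 0.2, "B": 0.7}   # P(X_t = 1 | class of last two symbols)

def simulate(n, x0=(0, 0)):
    """Simulate n symbols from the sparse order-2 Markov model."""
    seq = list(x0)
    for _ in range(n):
        cls = class_of[tuple(seq[-2:])]
        seq.append(int(rng.random() < p_next_one[cls]))
    return seq

seq = simulate(20000)

def cond_freq(h):
    """Empirical P(next symbol = 1 | last two symbols = h)."""
    hits = [seq[i + 2] for i in range(len(seq) - 2)
            if (seq[i], seq[i + 1]) == h]
    return sum(hits) / len(hits)

print({h: round(cond_freq(h), 3) for h in class_of})
```

The empirical conditional frequencies for the two histories in each class converge to the same shared parameter, which is exactly the constancy-within-class property the model exploits.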
46. Statistical Analysis of Switching Overvoltages due to Energization in Extra-High-Voltage Transmission Lines [Análisis Estadístico de Sobretensiones por Maniobra de energización en Líneas de Transmisión de Extra Alta Tensión].
- Author
-
Calle, J. H. and Guamán, W. P.
- Subjects
WEIBULL distribution ,HIGH voltages ,VOLTAGE ,STATISTICS ,PROBABILITY theory - Abstract
Copyright of Revista Técnica Energía is the property of Centro Nacional de Control de Energia CENACE and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
47. Iron metabolism and lymphocyte characterisation during Covid-19 infection in ICU patients: an observational cohort study.
- Author
-
Bolondi, Giuliano, Russo, Emanuele, Gamberini, Emiliano, Circelli, Alessandro, Meca, Manlio Cosimo Claudio, Brogi, Etrusca, Viola, Lorenzo, Bissoni, Luca, Poletti, Venerino, and Agnoletti, Vanni
- Subjects
IRON metabolism ,MORTALITY risk factors ,C-reactive protein ,CELL receptors ,EPIDEMICS ,FERRITIN ,LENGTH of stay in hospitals ,HOSPITAL admission & discharge ,INTENSIVE care units ,IRON ,LACTATE dehydrogenase ,LONGITUDINAL method ,SCIENTIFIC observation ,PATIENTS ,PROBABILITY theory ,HEMOPHAGOCYTIC lymphohistiocytosis ,RISK assessment ,STATISTICS ,TRANSFERRIN ,CD4 antigen ,DATA analysis ,SEVERITY of illness index ,FIBRIN fibrinogen degradation products ,LYMPHOPENIA ,TROPONIN ,LYMPHOCYTE count ,MANN Whitney U Test ,KRUSKAL-Wallis Test ,COVID-19 ,BLOOD ,DISEASE risk factors - Abstract
Background: Iron metabolism and immune response to SARS-CoV-2 have not yet been described in intensive care patients, although they are likely involved in Covid-19 pathogenesis. Methods: We performed an observational study during the peak of the pandemic in our intensive care unit, measuring D-dimer, C-reactive protein, troponin T, lactate dehydrogenase, ferritin, serum iron, transferrin, transferrin saturation, transferrin soluble receptor, lymphocyte count and NK, CD3, CD4, CD8 and B subgroups of 31 patients during the first 2 weeks of their ICU stay. Correlation with mortality and severity at the time of admission was tested with the Spearman coefficient and Mann–Whitney test. Trends over time were tested with the Kruskal–Wallis analysis. Results: Lymphopenia is severe and constant, with a nadir on day 2 of ICU stay (median 0.555 × 10⁹/L; interquartile range (IQR) 0.450 × 10⁹/L); all lymphocytic subgroups are dramatically reduced in critically ill patients, while the CD4/CD8 ratio remains normal. Neither ferritin nor lymphocyte count follows significant trends in ICU patients. Transferrin saturation is extremely reduced at ICU admission (median 9%; IQR 7%), then significantly increases at days 3 to 6 (median 33%, IQR 26.5%, p value 0.026). The same trend is observed with serum iron levels (median 25.5 μg/L, IQR 69 μg/L at admission; median 73 μg/L, IQR 56 μg/L on days 3 to 6) without reaching statistical significance. Hyperferritinemia is constant during the intensive care stay; however, its measurement might be helpful in identifying patients developing haemophagocytic lymphohistiocytosis. D-dimer is elevated and progressively increases from admission (median 1319 μg/L; IQR 1285 μg/L) to days 3 to 6 (median 6820 μg/L; IQR 6619 μg/L), despite not reaching significant results. We describe trends of all the abovementioned parameters during the ICU stay. Conclusions: The description of iron metabolism and lymphocyte count in Covid-19 patients admitted to the intensive care unit provided in this paper might allow a wider understanding of SARS-CoV-2 pathophysiology. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
48. Improving the forecasting performance of temporal hierarchies.
- Author
-
Spiliotis, Evangelos, Petropoulos, Fotios, and Assimakopoulos, Vassilios
- Subjects
MATHEMATICAL functions ,PHYSICAL sciences ,COGNITIVE science ,APPLIED mathematics ,LIFE sciences - Abstract
Temporal hierarchies have been widely used during the past few years, as they can provide more accurate coherent forecasts at different planning horizons. However, they still display some limitations, being mainly subject to the forecasting methods used for generating the base forecasts and to the particularities of the examined series. This paper deals with such limitations by considering three different strategies: (i) combining forecasts of multiple methods, (ii) applying bias adjustments and (iii) selectively implementing temporal hierarchies to avoid seasonal shrinkage. The proposed strategies can be applied either separately or simultaneously, are complements to the method considered for reconciling the base forecasts, and are completely independent from each other. Their effect is evaluated using the monthly series of the M and M3 competitions. The results are very promising, showing considerable potential for improving the performance of temporal hierarchies, both in terms of accuracy and bias. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
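Strategy (i) from this abstract, combining the base forecasts of multiple methods before reconciliation, can be sketched in a few lines. The series and the three simple "methods" below are toy stand-ins of our own, not the M/M3 competition setup:

```python
import numpy as np

# A short toy series; in the paper's setting these would be the monthly
# M and M3 competition series feeding a temporal hierarchy.
y = np.array([112., 118., 132., 129., 121., 135., 148., 148., 136., 119.])

def naive(y, h):            # repeat the last observation h steps ahead
    return np.repeat(y[-1], h)

def mean_method(y, h):      # forecast the historical mean
    return np.repeat(y.mean(), h)

def drift(y, h):            # extrapolate the average first difference
    slope = (y[-1] - y[0]) / (len(y) - 1)
    return y[-1] + slope * np.arange(1, h + 1)

# Equal-weight combination of the three sets of base forecasts; the
# combined forecasts would then be passed to the reconciliation step.
h = 4
base = np.vstack([f(y, h) for f in (naive, mean_method, drift)])
combined = base.mean(axis=0)
print(combined.round(2))
```

Equal weighting is the simplest combination scheme; the appeal, as the abstract notes, is that it is a complement to, and independent of, whichever reconciliation method sits downstream.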
49. Crash severity analysis of nighttime and daytime highway work zone crashes.
- Author
-
Zhang, Kairan and Hassan, Mohamed
- Subjects
ROAD work zones ,TRAFFIC safety ,ROAD closures ,OLDER automobile drivers ,TRANSPORTATION agencies ,CIVIL engineering ,TRANSPORTATION safety measures - Abstract
Introduction: Egypt’s National Road Project is a large infrastructure project which presently aims to upgrade 2500 kilometers of road networks and construct 4000 kilometers of new roads to meet today’s needs. This leads to an increase in the number of work zones on highways and therefore a rise in hazardous traffic conditions, which is why highway agencies are shifting towards night construction in order to reduce the adverse traffic impacts on the public. Although many studies have investigated work zone crashes, only a few provide a comparative analysis of the differences between nighttime and daytime work zone crashes. Methods: Data from Egyptian long-term highway work zone projects between 2010 and 2016 are studied with respect to the difference in injury severity between nighttime and daytime crashes, using separate mixed logit models. Results: The results indicate that significant differences exist between factors contributing to injury severity. Four variables are found significant only in the nighttime model and four other variables significant in the daytime model. The results show that older and male drivers, the number of lane closures, sidewise crashes, and rainy weather have opposite effects on injury severity in nighttime and daytime crashes. The findings presented in this paper could serve as an aid for transportation agencies in the development of efficient measures to improve safety in work zones. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
50. Group association test using a hidden Markov model.
- Author
-
CHENG, YICHEN, DAI, JAMES Y., and KOOPERBERG, CHARLES
- Subjects
GENOMICS ,MARKOV processes ,BIOMETRIC research ,BIOLOGICAL mathematical modeling ,BIOMETRY ,PROBABILITY theory ,RESEARCH funding ,STATISTICS ,DATA analysis ,SEQUENCE analysis - Abstract
In the genomic era, group association tests are of great interest. Due to the overwhelming number of individual genomic features, the power of testing for association of a single genomic feature at a time is often very small, as are the effect sizes for most features. Many methods have been proposed to test association of a trait with a group of features within a functional unit as a whole, e.g. all SNPs in a gene, yet few of these methods account for the fact that generally a substantial proportion of the features are not associated with the trait. In this paper, we propose to model the association for each feature in the group as a mixture of features with no association and features with non-zero associations to explicitly account for the possibility that a fraction of features may not be associated with the trait while other features in the group are. The feature-level associations are first estimated by generalized linear models; the sequence of these estimated associations is then modeled by a hidden Markov chain. To test for global association, we develop a modified likelihood ratio test based on a log-likelihood function that ignores higher order dependency plus a penalty term. We derive the asymptotic distribution of the likelihood ratio test under the null hypothesis. Furthermore, we obtain the posterior probability of association for each feature, which provides evidence of feature-level association and is useful for potential follow-up studies. In simulations and data application, we show that our proposed method performs well when compared with existing group association tests especially when there are only few features associated with the outcome. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF