3,938 results on '"Statistics as Topic methods"'
Search Results
202. Estimation of the diaphragm neuromuscular efficiency index in mechanically ventilated critically ill patients.
- Author
-
Jansen D, Jonkman AH, Roesthuis L, Gadgil S, van der Hoeven JG, Scheffer GJ, Girbes A, Doorduin J, Sinderby CS, and Heunks LMA
- Subjects
- Aged, Cohort Studies, Electromyography methods, Female, Humans, Intensive Care Units organization & administration, Interactive Ventilatory Support methods, Male, Middle Aged, Prospective Studies, Respiration, Artificial methods, Severity of Illness Index, Statistics as Topic methods, Work of Breathing physiology, Critical Illness therapy, Diaphragm physiopathology, Efficiency physiology, Statistics as Topic standards
- Abstract
Background: Diaphragm dysfunction develops frequently in ventilated intensive care unit (ICU) patients. Both disuse atrophy (ventilator over-assist) and high respiratory muscle effort (ventilator under-assist) seem to be involved. A strong rationale exists to monitor diaphragm effort and titrate support to maintain respiratory muscle activity within physiological limits. Diaphragm electromyography is used to quantify breathing effort and has been correlated with transdiaphragmatic pressure and esophageal pressure. The neuromuscular efficiency index (NME) can be used to estimate inspiratory effort; however, its repeatability has not been investigated yet. Our goal is to evaluate NME repeatability during an end-expiratory occlusion (NMEoccl) and its use to estimate the pressure generated by the inspiratory muscles (Pmus)., Methods: This is a prospective cohort study, performed in a medical-surgical ICU. A total of 31 adult patients were included, all ventilated in neurally adjusted ventilatory assist (NAVA) mode with an electrical activity of the diaphragm (EAdi) catheter in situ. At four time points within 72 h, five repeated end-expiratory occlusion maneuvers were performed. NMEoccl was calculated as delta airway pressure (ΔPaw)/ΔEAdi and was used to estimate Pmus. The repeatability coefficient (RC) was calculated to investigate the NMEoccl variability., Results: A total number of 459 maneuvers were obtained. At time T = 0, mean NMEoccl was 1.22 ± 0.86 cmH2O/μV with a RC of 82.6%. This implies that when NMEoccl is 1.22 cmH2O/μV, it is expected with a probability of 95% that the subsequent measured NMEoccl will be between 2.22 and 0.22 cmH2O/μV. Additional EAdi waveform analysis to correct for non-physiological-appearing waveforms did not improve NMEoccl variability. Selecting three out of five occlusions with the lowest variability reduced the RC to 29.8%., Conclusions: Repeated measurements of NMEoccl exhibit high variability, limiting the ability of a single NMEoccl maneuver to estimate neuromuscular efficiency and therefore the pressure generated by the inspiratory muscles based on EAdi.
- Published
- 2018
- Full Text
- View/download PDF
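Entry 202's core quantities are easy to compute once the occlusion swings are available. Below is a minimal Python sketch with made-up numbers (not the study's measurements); the repeatability-coefficient convention used here (2.77 × within-subject SD, expressed as a percentage of the mean) is an assumption, not a quotation of the paper's definition.

```python
# Hypothetical occlusion data (not the study's measurements).
import numpy as np

delta_paw = np.array([12.4, 10.8, 13.1, 11.6, 12.0])   # ΔPaw per maneuver, cmH2O
delta_eadi = np.array([10.1, 9.0, 10.9, 9.3, 10.4])    # ΔEAdi per maneuver, μV

nme_occl = delta_paw / delta_eadi                       # NMEoccl = ΔPaw / ΔEAdi
within_sd = nme_occl.std(ddof=1)                        # SD over repeated maneuvers
rc_percent = 2.77 * within_sd / nme_occl.mean() * 100   # assumed RC convention

print("NMEoccl per maneuver:", np.round(nme_occl, 2))
print(f"mean NMEoccl = {nme_occl.mean():.2f} cmH2O/uV, RC = {rc_percent:.1f}%")
```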
203. Time-series clustering of cage-level sea lice data.
- Author
-
Marques AR, Forde H, and Revie CW
- Subjects
- Algorithms, Animals, Cluster Analysis, Population Dynamics, Time Factors, Copepoda, Statistics as Topic methods
- Abstract
Sea lice Lepeophtheirus salmonis (Krøyer) are a major ectoparasite affecting farmed Atlantic salmon in most major salmon producing regions. Substantial resources are applied to sea lice control and the development of new technologies towards this end. Identifying and understanding how sea lice population patterns vary among cages on a salmon farm can be an important step in the design and analysis of any sea lice control strategy. Norway's intense monitoring efforts have provided salmon farmers and researchers with a wealth of sea lice infestation data. A frequently registered parameter is the number of adult female sea lice per cage. These time-series data can be analysed descriptively, the similarity between time-series quantified, so that groups and patterns can be identified among cages, using clustering algorithms capable of handling such dynamic data. We apply such algorithms to investigate the pattern of female sea lice counts among cages for three Atlantic salmon farms in Norway. A series of strategies involving a combination of distance measures and prototypes were explored and cluster evaluation was performed using cluster validity indices. Repeated agreement on cluster membership for different combinations of distance and centroids was taken to be a strong indicator of clustering while the stability of these results reinforced this likelihood. Though drivers behind clustering are not thoroughly investigated here, it appeared that fish weight at time of stocking and other management practices were strongly related to cluster membership. In addition to these internally driven factors it is also possible that external sources of infestation may drive patterns of sea lice infestation in groups of cages; for example, those most proximal to an external source. This exploratory method proved useful as a pattern discovery tool for cages in salmon farms., Competing Interests: We have the following interests: Henny Forde is employed by Måsøval Fiskeoppdrett AS, who provided the data for this study. There are no patents, products in development or marketed products to declare. This does not alter our adherence to all the PLOS ONE policies on sharing data and materials, as detailed online in the guide for authors.
- Published
- 2018
- Full Text
- View/download PDF
204. [Simulation and comparison of techniques for the correction of incomplete data on age to calculate incidence rates].
- Author
-
Oliveira MM, Latorre MDRDO, Tanaka LF, and Curado MP
- Subjects
- Brazil epidemiology, Data Accuracy, Female, Humans, Incidence, Male, Reproducibility of Results, Time Factors, Urologic Neoplasms epidemiology, Age Factors, Databases as Topic standards, Health Information Systems standards, Registries standards, Statistics as Topic methods
- Abstract
The objective was to compare two techniques to estimate age in databases with incomplete records and analyze their application to the calculation of cancer incidence. The study used the database of the Population-Based Cancer Registry from the city of São Paulo, Brazil, containing cases of urinary tract cancer diagnosed from 1997 to 2013. Two techniques were applied to estimate age: correction factor and multiple imputation. Using binomial distribution, six databases were simulated with different proportions of incomplete data on patient's age (from 5% to 50%). The ratio between the incidence rates was calculated, using the complete database as reference, whose standardized incidence was 11.83/100,000; the other incidence rates in the databases, with at least 5% incomplete data for age, were underestimated. By applying the correction factors, the corrected rates did not differ from the standardized rates, but this technique does not allow correcting specific rates. Multiple imputation was useful for correcting the standardized and specific rates in databases with up to 30% of incomplete data, but the specific rates for individuals under 50 years of age were underestimated. Databases with 5% incomplete data or more require correction. Although the implementation of multiple imputation is complex, it proved to be superior to the correction factor. However, it should be used sparingly, since age-specific rates may remain underestimated.
- Published
- 2018
- Full Text
- View/download PDF
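A minimal sketch of the correction-factor technique discussed in entry 204, with hypothetical counts. As the abstract notes, this proportional correction can fix the overall rate but cannot recover age-specific rates, which is where multiple imputation comes in.

```python
# Hypothetical registry counts (not the study's data).
cases_known_age = 250          # cases with age recorded
cases_missing_age = 25         # cases with age missing
population = 1_500_000

correction_factor = (cases_known_age + cases_missing_age) / cases_known_age  # 1.10
crude_rate = cases_known_age / population * 100_000      # per 100,000, missing ignored
corrected_rate = crude_rate * correction_factor          # overall rate, corrected

print(f"correction factor = {correction_factor:.2f}")
print(f"crude {crude_rate:.2f} vs corrected {corrected_rate:.2f} per 100,000")
```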
205. Non-urgent use of emergency departments: populations most likely to overestimate illness severity.
- Author
-
Andrews H and Kass L
- Subjects
- Adult, Aged, Cross-Sectional Studies, Educational Status, Emergency Service, Hospital organization & administration, Emergency Service, Hospital statistics & numerical data, Female, Humans, Income statistics & numerical data, Injury Severity Score, Male, Middle Aged, Patients statistics & numerical data, Pennsylvania, Statistics as Topic methods, Patient Acuity, Patients psychology, Statistics as Topic standards
- Abstract
Patients' overestimation of their illness severity appears to contribute to the national epidemic of emergency department (ED) overcrowding. This study aims to elucidate which patient populations are more likely to have a higher estimation of illness severity (EIS). The investigator surveyed demographic factors of all non-urgent patients at an academic ED. The patients and physicians were asked to estimate the patients' illness severity using a 1-10 scale with anchors. The difference of these values was taken and compared across patient demographic subgroups using a 2-sample t-test. One hundred and seventeen patients were surveyed. The mean patient EIS was 5.22 (IQR 4), while the mean physician EIS was less severe at 7.57 (IQR 3), a difference of 2.35 (p < 0.0001). Patient subgroups with the highest EIS compared to the physicians' EIS include those who were self-referred (difference of 2.65, p = 0.042), with income ≤ $25,000 (difference of 2.96, p = 0.004), with less than a college education (difference of 2.83, p = 0.018), and with acute-on-chronic musculoskeletal pain (difference of 4.17, p = 0.001). If we assume the physicians' EIS is closer to the true illness severity, patients with lower socioeconomic status, lower education status, who were self-referred, and who suffered from acute-on-chronic musculoskeletal pain are more likely to overestimate their illness severity and may contribute to non-urgent use of the ED. They may benefit from further education or resources for care to prevent ED misuse. The large difference for acute-on-chronic musculoskeletal pain may reflect a physician's bias to underestimate the severity of a patient's illness in this particular population.
- Published
- 2018
- Full Text
- View/download PDF
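The core analysis in entry 205 is a two-sample t-test on patient-minus-physician severity differences compared across subgroups. A sketch with simulated values (the subgroup means and sizes below are invented, not taken from the study):

```python
# Simulated patient-minus-physician severity differences for two subgroups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
diff_self_referred = rng.normal(loc=2.7, scale=1.5, size=40)   # hypothetical
diff_other = rng.normal(loc=1.8, scale=1.5, size=77)           # hypothetical

t_stat, p_value = stats.ttest_ind(diff_self_referred, diff_other)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```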
206. Comparison of Data on Serious Adverse Events and Mortality in ClinicalTrials.gov, Corresponding Journal Articles, and FDA Medical Reviews: Cross-Sectional Analysis.
- Author
-
Pradhan R and Singh S
- Subjects
- Clinical Trials as Topic methods, Cross-Sectional Studies, Drug-Related Side Effects and Adverse Reactions diagnosis, Drug-Related Side Effects and Adverse Reactions epidemiology, Humans, Mortality trends, Periodicals as Topic trends, Statistics as Topic methods, Statistics as Topic trends, United States epidemiology, United States Food and Drug Administration trends, Clinical Trials as Topic standards, Drug-Related Side Effects and Adverse Reactions mortality, Periodicals as Topic standards, Statistics as Topic standards, United States Food and Drug Administration standards
- Abstract
Introduction: Inconsistencies in data on serious adverse events (SAEs) and mortality in ClinicalTrials.gov and corresponding journal articles pose a challenge to research transparency., Objective: The objective of this study was to compare data on SAEs and mortality from clinical trials reported in ClinicalTrials.gov and corresponding journal articles with US Food and Drug Administration (FDA) medical reviews., Methods: We conducted a cross-sectional study of a randomly selected sample of new molecular entities approved during the study period 1 January 2013 to 31 December 2015. We extracted data on SAEs and mortality from 15 pivotal trials from ClinicalTrials.gov and corresponding journal articles (the two index resources), and FDA medical reviews (reference standard). We estimated the magnitude of deviations in rates of SAEs and mortality between the index resources and the reference standard., Results: We found deviations in rates of SAEs (30% in ClinicalTrials.gov and 30% in corresponding journal articles) and mortality (72% in ClinicalTrials.gov and 53% in corresponding journal articles) when compared with the reference standard. The intra-class correlation coefficient between the three resources was 0.99 (95% confidence interval [CI] 0.98-0.99) for SAE rates and 0.99 (95% CI 0.97-0.99) for mortality rates., Conclusion: There are differences in data on rates of SAEs and mortality in randomized clinical trials in both ClinicalTrials.gov and journal articles compared with FDA reviews. Further efforts should focus on decreasing existing discrepancies to enhance the transparency and reproducibility of data reporting in clinical trials.
- Published
- 2018
- Full Text
- View/download PDF
207. Non-bleeding Adverse Events with the Use of Direct Oral Anticoagulants: A Sequence Symmetry Analysis.
- Author
-
Maura G, Billionnet C, Coste J, Weill A, Neumann A, and Pariente A
- Subjects
- Administration, Oral, Aged, Aged, 80 and over, Cohort Studies, Female, France epidemiology, Humans, Male, Middle Aged, Anticoagulants administration & dosage, Anticoagulants adverse effects, Drug-Related Side Effects and Adverse Reactions diagnosis, Drug-Related Side Effects and Adverse Reactions epidemiology, Hemorrhage, Statistics as Topic methods
- Abstract
Introduction: Postmarketing pharmacovigilance reports have raised concerns about non-bleeding adverse events associated with direct oral anticoagulants (DOACs), but only limited results are available from large claims databases., Objective: The aim of this study was to assess the potential association between DOAC initiation and the onset of four types of non-bleeding adverse events by sequence symmetry analysis (SSA)., Methods: SSA was performed using nationwide data from the French National Healthcare databases (Régime Général, 50 million beneficiaries) to assess a cohort of 386,081 DOAC new users for the first occurrence of four types of non-bleeding outcomes: renal, hepatic, skin outcomes identified by using hospitalization discharge diagnoses, and gastrointestinal outcomes by using medication reimbursement. Asymmetry in the distribution of each investigated outcome occurring before and after initiation of DOAC therapy was used to test the association between DOAC therapy and these outcomes. SSA inherently controls for time-constant confounders, and adjusted sequence ratios were computed after correcting for temporal trends. Negative (glaucoma) and positive (bleeding, depressive disorders) control outcomes were used and analyses were replicated on a cohort of 310,195 patients initiating a vitamin K antagonist (VKA)., Results: This study demonstrated the expected positive association between either DOAC or VKA therapy and hospitalised bleeding and initiation of antidepressant therapy, while no association was observed between either DOAC or VKA therapy and initiation of antiglaucoma medications. For DOAC therapy, signals were the associations with hepatic outcomes, including acute liver injury [for the 3-month time window, aSR3 = 2.71, 95% confidence interval (CI) 1.79-4.52]; gastrointestinal outcomes, including initiation of drugs for constipation and antiemetic drugs (aSR3 = 1.31, 95% CI 1.27-1.36; and 1.17, 95% CI 1.12-1.22, respectively); and kidney diseases (aSR3 = 1.33, 95% CI 1.29-1.37)., Conclusion: Results of this nationwide study suggest that DOACs are associated with rare but severe liver injury and more frequent gastrointestinal disorders. A low risk of kidney injury with DOAC therapy can also not be excluded.
- Published
- 2018
- Full Text
- View/download PDF
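Entry 207's sequence symmetry analysis reduces, in its crude form, to a ratio of counts: patients whose outcome first occurred after DOAC initiation versus before. A sketch with hypothetical counts; the null-effect correction for temporal trends that yields the paper's adjusted ratios is only noted, not implemented.

```python
# Hypothetical counts of first-outcome timing relative to DOAC initiation.
import math

n_after, n_before = 130, 48          # outcome after vs. before initiation
crude_sr = n_after / n_before        # crude sequence ratio

# Rough 95% CI via a binomial approximation on the proportion "after"
p_hat = n_after / (n_after + n_before)
se = math.sqrt(p_hat * (1 - p_hat) / (n_after + n_before))
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"crude SR = {crude_sr:.2f}, ~95% CI {lo / (1 - lo):.2f}-{hi / (1 - hi):.2f}")
# The adjusted SR additionally divides by a "null-effect" ratio reflecting
# temporal trends in prescribing and outcome incidence (not computed here).
```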
208. Penalized spline smoothing using Kaplan-Meier weights with censored data.
- Author
-
Orbe J and Virto J
- Subjects
- Kaplan-Meier Estimate, Regression Analysis, Biometry methods, Statistics as Topic methods
- Abstract
In this paper, we consider the problem of nonparametric curve fitting in the specific context of censored data. We propose an extension of the penalized splines approach using Kaplan-Meier weights to take into account the effect of censorship and generalized cross-validation techniques to choose the smoothing parameter adapted to the case of censored samples. Using various simulation studies, we analyze the effectiveness of the censored penalized splines method proposed and show that the performance is quite satisfactory. We have extended this proposal to a generalized additive models (GAM) framework introducing a correction of the censorship effect, thus enabling more complex models to be estimated immediately. A real dataset from Stanford Heart Transplant data is also used to illustrate the methodology proposed, which is shown to be a good alternative when the probability distribution for the response variable and the functional form are not known in censored regression models., (© 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.)
- Published
- 2018
- Full Text
- View/download PDF
209. Web Site and R Package for Computing E-values.
- Author
-
Mathur MB, Ding P, Riddell CA, and VanderWeele TJ
- Subjects
- Data Interpretation, Statistical, Humans, Internet, Observational Studies as Topic methods, Causality, Statistics as Topic methods
- Published
- 2018
- Full Text
- View/download PDF
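Entry 209 describes a website and R package for computing E-values; the underlying formula (VanderWeele and Ding) is short enough to sketch directly. The Python version below only illustrates that formula and is not the authors' package.

```python
# E-value for a risk ratio; illustration of the published formula only.
import math

def e_value(rr: float) -> float:
    """Minimum confounder strength (risk-ratio scale) needed to explain away RR."""
    if rr < 1:
        rr = 1 / rr        # protective effects: invert first
    return rr + math.sqrt(rr * (rr - 1))

print(f"E-value for RR = 2.0: {e_value(2.0):.2f}")   # ≈ 3.41
print(f"E-value for RR = 0.8: {e_value(0.8):.2f}")
```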
210. Understanding Interventional Effects: A More Natural Approach to Mediation Analysis?
- Author
-
Moreno-Betancur M and Carlin JB
- Subjects
- Confounding Factors, Epidemiologic, Data Interpretation, Statistical, Humans, Models, Statistical, Probability, Randomized Controlled Trials as Topic, Causality, Statistics as Topic methods
- Abstract
The causal mediation literature has mainly focused on "natural effects" as measures of mediation, but these have been criticized for their reliance on empirically unverifiable assumptions. They are also impossible to estimate without additional untestable assumptions in the common situation of exposure-induced mediator-outcome confounding. "Interventional effects" have been proposed as alternative measures that overcome these limitations, and 2 versions have been described for the exposure-induced confounding problem. We aim to provide insight into the interpretation of these effects, particularly by describing randomized controlled trials that could hypothetically be conducted to estimate them. In contrast with natural effects, which are defined in terms of individual-level interventions, the definitions of interventional effects rely on population-level interventions. This distinction underpins the previously described advantages of interventional effects, and reflects a shift from individual effects to more tangible population-average effects. We discuss the conceptual and practical implications for the conduct of mediation analysis. See video abstract at, http://links.lww.com/EDE/B383.
- Published
- 2018
- Full Text
- View/download PDF
211. General single-index survival regression models for incident and prevalent covariate data and prevalent data without follow-up.
- Author
-
Chen SW and Chiang CT
- Subjects
- Analysis of Variance, Comorbidity trends, Computer Simulation, Incidence, Myocardial Infarction epidemiology, Prevalence, Statistics as Topic methods, Models, Statistical, Regression Analysis, Survival Analysis
- Abstract
This article mainly focuses on analyzing covariate data from incident and prevalent cohort studies and a prevalent sample with only baseline covariates of interest and truncation times. Our major task in both research streams is to identify the effects of covariates on a failure time through very general single-index survival regression models without observing survival outcomes. With a strict increase of the survival function in the linear predictor, the ratio of incident and prevalent covariate densities is shown to be a non-degenerate and monotonic function of the linear predictor under covariate-independent truncation. Without such a structural assumption, the conditional density of a truncation time in a prevalent cohort is ensured to be a non-degenerate function of the linear predictor. In light of these features, some innovative approaches, which are based on the maximum rank correlation estimation or the pseudo least integrated squares estimation, are developed to estimate the coefficients of covariates up to a scale factor. Existing theoretical results are further used to establish the √n-consistency and asymptotic normality of the proposed estimators. Moreover, extensive simulations are conducted to assess and compare the finite-sample performance of various estimators. To illustrate the methodological ideas, we also analyze data from the Worcester Heart Attack Study and the National Comorbidity Survey Replication., (© 2017, The International Biometric Society.)
- Published
- 2018
- Full Text
- View/download PDF
212. The contribution of agonist and antagonist activities of α4β2* nAChR ligands to smoking cessation efficacy: a quantitative analysis of literature data.
- Author
-
Rollema H and Hurst RS
- Subjects
- Animals, Azepines therapeutic use, Benzazepines therapeutic use, Heterocyclic Compounds, 4 or More Rings therapeutic use, Humans, Ligands, Smoking epidemiology, Treatment Outcome, Varenicline therapeutic use, Nicotinic Agonists therapeutic use, Nicotinic Antagonists therapeutic use, Receptors, Nicotinic physiology, Smoking drug therapy, Smoking Cessation methods, Statistics as Topic methods
- Abstract
Rationale and Objective: Two mechanisms underlie smoking cessation efficacies of α4β2* nicotinic acetylcholine receptor (nAChR) agonists: a "nicotine-like" agonist activity reduces craving by substituting for nicotine during a quit attempt, and a "nicotine-blocking" antagonist activity attenuates reinforcement by competing with inhaled nicotine during a relapse. To evaluate the contribution of each mechanism to clinical efficacy, we estimated the degree of agonist and antagonist activities of nicotine replacement therapy (NRT), varenicline, cytisine, and the discontinued nAChR agonists dianicline, ABT-418, ABT-089, CP-601927, and CP-601932, relative to the functional effects of nicotine from smoking., Methods: Functional activities that occur in vivo with clinical doses were predicted from literature data on binding and functional potencies at the target α4β2 nAChR, as well as at α6β2* nAChRs, and from estimates of free drug exposures in human brain. Agonist activity is comprised of nAChR activation and desensitization, which were expressed as percentages of desensitization and activation by nicotine from smoking. Antagonist activity was expressed as the reduction in nAChR occupancy by nicotine during smoking in the presence of an agonist., Results: Comparisons with odds ratios at end of treatment suggest that extensive α4β2 and α6β2* nAChR desensitization combined with α6β2* nAChR activation at similar levels as nicotine from smoking is associated with clinical efficacy (NRT, varenicline, cytisine, ABT-418). Effective competition with inhaled nicotine for α4β2 and α6β2* nAChRs further improves clinical efficacy (varenicline). Other discontinued nAChR agonists have lower agonist and antagonist activities at α4β2 nAChRs and are inactive or less efficacious than NRT (dianicline, ABT-089, CP-601927, CP-601932)., Conclusion: Three pharmacological effects appear to be key factors underlying smoking cessation efficacy: the degree of activation of α6β2* nAChRs, desensitization of α4β2 and α6β2* nAChRs (agonist activity), and the reduction of nicotine occupancy at α4β2 and α6β2* nAChRs (antagonist activity). No single activity is dominant, and the level of smoking cessation efficacy depends on the profile of these activities achieved at clinical doses. While adequate agonist activity alone seems sufficient for a clinical effect (e.g., NRT, cytisine), clinical efficacy is improved with substantial competitive antagonism of α4β2 nAChRs, i.e., if the drug has a dual agonist-antagonist mechanism of action (e.g., varenicline).
- Published
- 2018
- Full Text
- View/download PDF
213. Selection criterion of work matrix as a function of limiting estimates of the covariance matrix of correlated data in GEE.
- Author
-
Silva JAD and Cirillo MA
- Subjects
- Analysis of Variance, Child, Coffee chemistry, Environmental Pollution statistics & numerical data, Humans, Monte Carlo Method, Statistics as Topic methods
- Abstract
The modeling of generalized estimating equations used in the analysis of longitudinal data, whether with continuous or discrete variables, necessarily requires the prior specification of a correlation matrix in the iterative process in order to obtain estimates of the regression parameters. Such a matrix is called the working correlation matrix, and its incorrect specification produces less efficient estimates of the model parameters. For this reason, this study proposes a selection criterion for the working correlation matrix based on the covariance matrix estimates of the correlated responses resulting from the limiting values of the association parameter estimates. For validation of the criterion, we used simulation studies considering normal and binary correlated responses. Compared with some criteria in the literature, the proposed criterion performed better when the exchangeable working correlation structure was the true structure in the simulated samples; for large samples, the proposed criterion showed behavior similar to the other criteria, resulting in higher success rates., (© 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.)
- Published
- 2018
- Full Text
- View/download PDF
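For context on entry 213, the sketch below fits a GEE to simulated longitudinal data under two candidate working correlation structures using statsmodels. The paper's own selection criterion (based on limiting covariance estimates) is not implemented; this only shows the alternative fits one would then compare.

```python
# Simulated longitudinal data with a subject-level random effect, fitted under
# two working correlation structures.
import numpy as np
import statsmodels.api as sm
from statsmodels.genmod.cov_struct import Independence, Exchangeable

rng = np.random.default_rng(1)
n_subjects, n_times = 50, 4
subject = np.repeat(np.arange(n_subjects), n_times)
x = rng.normal(size=n_subjects * n_times)
u = np.repeat(rng.normal(scale=0.8, size=n_subjects), n_times)   # induces exchangeable correlation
y = 1.0 + 0.5 * x + u + rng.normal(scale=0.5, size=n_subjects * n_times)
exog = sm.add_constant(x)

for cov in (Independence(), Exchangeable()):
    res = sm.GEE(y, exog, groups=subject, cov_struct=cov,
                 family=sm.families.Gaussian()).fit()
    print(type(cov).__name__, "coefficients:", np.round(res.params, 3))
```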
214. An alternative robust estimator of average treatment effect in causal inference.
- Author
-
Liu J, Ma Y, and Wang L
- Subjects
- Birth Weight, Computer Simulation, Female, Humans, Maternal-Fetal Exchange, Pregnancy, Smoking, Treatment Outcome, Models, Statistical, Propensity Score, Statistics as Topic methods
- Abstract
The problem of estimating the average treatment effects is important when evaluating the effectiveness of medical treatments or social intervention policies. Most of the existing methods for estimating the average treatment effect rely on some parametric assumptions about the propensity score model or the outcome regression model one way or the other. In reality, both models are prone to misspecification, which can have undue influence on the estimated average treatment effect. We propose an alternative robust approach to estimating the average treatment effect based on observational data in the challenging situation when neither a plausible parametric outcome model nor a reliable parametric propensity score model is available. Our estimator can be considered as a robust extension of the popular class of propensity score weighted estimators. This approach has the advantage of being robust, flexible, data adaptive, and it can handle many covariates simultaneously. Adopting a dimension reduction approach, we estimate the propensity score weights semiparametrically by using a non-parametric link function to relate the treatment assignment indicator to a low-dimensional structure of the covariates which are formed typically by several linear combinations of the covariates. We develop a class of consistent estimators for the average treatment effect and study their theoretical properties. We demonstrate the robust performance of the estimators on simulated data and a real data example of investigating the effect of maternal smoking on babies' birth weight., (© 2018, The International Biometric Society.)
- Published
- 2018
- Full Text
- View/download PDF
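Entry 214 extends the class of propensity-score-weighted estimators. As a point of reference, here is the standard inverse-probability-weighted (IPW) estimator of the average treatment effect on simulated data with a parametric (logistic) propensity model, i.e., exactly the kind of specification the paper tries to avoid relying on.

```python
# Simulated observational data; standard IPW estimator with a logistic propensity model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2000
x = rng.normal(size=(n, 3))
p_treat = 1 / (1 + np.exp(-(0.4 * x[:, 0] - 0.3 * x[:, 1])))
a = rng.binomial(1, p_treat)                    # treatment indicator
y = 2.0 * a + x[:, 0] + rng.normal(size=n)      # true average treatment effect = 2

ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]   # estimated propensity score
ate_ipw = np.mean(a * y / ps) - np.mean((1 - a) * y / (1 - ps))
print(f"IPW ATE estimate: {ate_ipw:.2f} (true value 2.0)")
```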
215. Exploratory dietary patterns: a systematic review of methods applied in pan-European studies and of validation studies.
- Author
-
Jannasch F, Riordan F, Andersen LF, and Schulze MB
- Subjects
- Diet Records, Europe, Humans, Reproducibility of Results, Diet, Diet Surveys, Feeding Behavior, Statistics as Topic methods, Validation Studies as Topic
- Abstract
Besides a priori approaches, using previous knowledge about food characteristics, exploratory dietary pattern (DP) methods, using data at hand, are commonly applied. This systematic literature review aimed to identify exploratory methods on DP in pan-European studies and to inform the development of the DEterminants of DIet and Physical ACtivity (DEDIPAC) toolbox of methods suitable for use in future European studies. The search was conducted in three databases on prospective studies in healthy, free-living people across the whole life span. To identify validated DP methods, an additional search without regional restrictions was conducted. Studies including at least two European countries were retained. The search resulted in six pan-European studies applying principal component/factor analysis (PC/FA) (n 5) or cluster analysis (n 2). The criteria to retain PC/factors ranged from the application of the eigenvalue>1 criterion, the scree plot and/or the interpretability criterion. Furthermore, seven validation studies were identified: DP, derived by PC/FA (n 6) or reduced rank regression (RRR) (n 1) were compared using dietary information from FFQ (n 6) or dietary history (n 1) as study instrument and dietary records (n 6) or 24-h dietary recalls (n 1) as reference. The correlation coefficients for the derived DP ranged from modest to high. To conclude, PC/FA was predominantly applied using the eigenvalue criterion and scree plot to retain DP, but a better description of the applied criteria is highly recommended to enable a standardised application of the method. Research gaps were identified for the methods cluster analysis and RRR, as well as for validation studies on DP.
- Published
- 2018
- Full Text
- View/download PDF
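The retention rule most often reported in entry 215's review is principal component analysis with the eigenvalue > 1 (Kaiser) criterion. A minimal sketch on placeholder intake data:

```python
# Placeholder intake data; PCA on the correlation matrix with the Kaiser rule.
import numpy as np

rng = np.random.default_rng(3)
intakes = rng.normal(size=(500, 12))            # 500 participants x 12 food groups

corr = np.corrcoef(intakes, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, loadings = eigvals[order], eigvecs[:, order]

keep = eigvals > 1                              # eigenvalue > 1 criterion
print("eigenvalues:", np.round(eigvals, 2))
print("components retained:", int(keep.sum()))
```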
216. Semiparametric estimation of the accelerated mean model with panel count data under informative examination times.
- Author
-
Chiou SH, Xu G, Yan J, and Huang CY
- Subjects
- Chemoprevention methods, Chemoprevention statistics & numerical data, Clinical Trials as Topic, Computer Simulation, Recurrence, Regression Analysis, Sample Size, Skin Neoplasms prevention & control, Statistics as Topic methods, Time Factors
- Abstract
Panel count data arise when the number of recurrent events experienced by each subject is observed intermittently at discrete examination times. The examination time process can be informative about the underlying recurrent event process even after conditioning on covariates. We consider a semiparametric accelerated mean model for the recurrent event process and allow the two processes to be correlated through a shared frailty. The regression parameters have a simple marginal interpretation of modifying the time scale of the cumulative mean function of the event process. A novel estimation procedure for the regression parameters and the baseline rate function is proposed based on a conditioning technique. In contrast to existing methods, the proposed method is robust in the sense that it requires neither the strong Poisson-type assumption for the underlying recurrent event process nor a parametric assumption on the distribution of the unobserved frailty. Moreover, the distribution of the examination time process is left unspecified, allowing for arbitrary dependence between the two processes. Asymptotic consistency of the estimator is established, and the variance of the estimator is estimated by a model-based smoothed bootstrap procedure. Numerical studies demonstrated that the proposed point estimator and variance estimator perform well with practical sample sizes. The methods are applied to data from a skin cancer chemoprevention trial., (© 2017, The International Biometric Society.)
- Published
- 2018
- Full Text
- View/download PDF
217. The effectiveness of scaling procedures for comparing ground reaction forces.
- Author
-
Stickley CD, Andrews SN, Parke EA, and Hetzler RK
- Subjects
- Adult, Female, Humans, Kinetics, Linear Models, Male, Nonlinear Dynamics, Young Adult, Anthropometry, Mechanical Phenomena, Statistics as Topic methods
- Abstract
Various scaling methods are used when attempting to remove the influence of anthropometric differences on ground reaction forces (GRF) when comparing groups. Though commonly used, ratio scaling often results in an over-correction. Allometric scaling has previously been suggested for kinetic variables, but its effectiveness in partialing out the effect of anthropometrics is unknown due to a lack of consistent application. This study examined the effectiveness of allometric scaling vertical, braking and propulsive GRF and loading rate for 84 males and 47 females while running at 4.0 m/s. Raw, unfiltered data were ratio scaled by body mass (BM), height (HT), and BM multiplied by HT (BM∗HT). Gender-specific exponents for allometric scaling were determined by performing a log-linear (for BM and HT individually) or log-multilinear regression (BMHT). Pearson product-moment correlations were used to assess the effectiveness of each scaling method. Ratio scaling by BM, HT, or BM∗HT resulted in an over-correction of the data for most variables and left a considerable portion of the variance still attributable to anthropometrics. Allometric scaling by BM successfully removed the effect of BM and HT for all variables except for braking GRF in males and vertical GRF in females. However, allometric scaling for BMHT successfully removed the effect of BM and HT for all reactionary forces in both genders. Based on these results, allometric scaling for BMHT was the most appropriate scaling method for partialing out the effect of BM and HT on kinetic variables to allow for effective comparisons between groups or individuals., (Copyright © 2018 Elsevier Ltd. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
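Allometric scaling as used in entry 217 amounts to estimating an exponent from a log-log regression and dividing each force by body mass raised to that exponent. A sketch with simulated values (the data and the resulting exponent are illustrative, not the study's):

```python
# Simulated body mass and ground reaction force values (illustrative only).
import numpy as np

rng = np.random.default_rng(5)
bm = rng.uniform(55, 95, size=120)                                  # body mass, kg
grf = 20 * bm ** 0.9 * np.exp(rng.normal(scale=0.05, size=120))     # vertical GRF, N

b, a = np.polyfit(np.log(bm), np.log(grf), 1)    # slope of log-log fit = allometric exponent
scaled = grf / bm ** b                           # allometrically scaled force

print(f"estimated exponent b = {b:.2f}")
print(f"corr(raw GRF, BM)    = {np.corrcoef(grf, bm)[0, 1]:.2f}")
print(f"corr(scaled GRF, BM) = {np.corrcoef(scaled, bm)[0, 1]:.2f}")
```

If the scaling worked, the scaled forces should show essentially no residual correlation with body mass, which is the "partialing out" criterion the abstract describes.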
218. Estimating Future Health Technology Diffusion Using Expert Beliefs Calibrated to an Established Diffusion Model.
- Author
-
Grimm SE, Stevens JW, and Dixon S
- Subjects
- Humans, Inventions trends, Statistics as Topic methods, Time Factors
- Abstract
Objectives: Estimates of future health technology diffusion, or future uptake over time, are a requirement for different analyses performed within health technology assessments. Methods for obtaining such estimates include constant uptake estimates based on expert opinion or analogous technologies, and extrapolation from initial data points using parametric curves, but these remain divorced from established diffusion theory and modeling. We propose an approach to obtaining diffusion estimates using experts' beliefs calibrated to an established diffusion model to address this methodologic gap., Methods: We performed an elicitation of experts' beliefs on future diffusion of a new preterm birth screening illustrative case study technology. The elicited quantities were chosen such that they could be calibrated to yield the parameters of the Bass model of new product growth, which was chosen based on a review of the diffusion literature., Results: With the elicitation of only three quantities per diffusion curve, our approach enabled us to quantify uncertainty about diffusion of the new technology in different scenarios. Pooled results showed that the attainable number of adoptions was predicted to be relatively low compared with what was thought possible. Further research evidence improved the attainable number of adoptions only slightly but resulted in greater speed of diffusion., Conclusions: The proposed approach of eliciting experts' beliefs about diffusion and informing the Bass model has the potential to fill the methodologic gap evident in value of implementation and research, as well as budget impact and some cost-effectiveness analyses., (Copyright © 2018 ISPOR–The Professional Society for Health Economics and Outcomes Research. Published by Elsevier Inc. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
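Entry 218 calibrates elicited beliefs to the Bass model of new product growth. Its cumulative-adoption curve is a closed-form function of an innovation coefficient p, an imitation coefficient q, and a market potential m; the parameter values below are arbitrary illustrations, not elicited quantities.

```python
# Bass model cumulative adoption; parameter values are arbitrary illustrations.
import numpy as np

def bass_cumulative(t, p, q, m):
    """Cumulative adoptions at time t: innovation p, imitation q, market potential m."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

t = np.arange(0, 11)                                  # years since launch
print(np.round(bass_cumulative(t, p=0.03, q=0.4, m=1000)).astype(int))   # S-shaped uptake
```

Roughly speaking, the paper's calibration runs the other way: a few elicited quantities per diffusion curve are converted into values of p, q and m rather than assumed up front.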
219. Comparison of Low-Density Lipoprotein Cholesterol Assessment by Martin/Hopkins Estimation, Friedewald Estimation, and Preparative Ultracentrifugation: Insights From the FOURIER Trial.
- Author
-
Martin SS, Giugliano RP, Murphy SA, Wasserman SM, Stein EA, Ceška R, López-Miranda J, Georgiev B, Lorenzatti AJ, Tikkanen MJ, Sever PS, Keech AC, Pedersen TR, and Sabatine MS
- Subjects
- Aged, Antibodies, Monoclonal therapeutic use, Antibodies, Monoclonal, Humanized, Anticholesteremic Agents therapeutic use, Atherosclerosis drug therapy, Cholesterol, HDL analysis, Cholesterol, HDL blood, Cholesterol, LDL blood, Cholesterol, VLDL analysis, Cholesterol, VLDL blood, Female, Humans, Hyperlipidemias drug therapy, Male, Middle Aged, Randomized Controlled Trials as Topic, Risk Assessment, Triglycerides analysis, Triglycerides blood, Atherosclerosis blood, Cholesterol, LDL analysis, Hyperlipidemias blood, Statistics as Topic methods, Ultracentrifugation methods
- Abstract
Importance: Recent studies have shown that Friedewald underestimates low-density lipoprotein cholesterol (LDL-C) at lower levels, which could result in undertreatment of high-risk patients. A novel method (Martin/Hopkins) using a patient-specific conversion factor provides more accurate LDL-C levels. However, this method has not been tested in proprotein convertase subtilisin/kexin type 9 (PCSK9) inhibitor-treated patients., Objective: To investigate accuracy of 2 different methods for estimating LDL-C levels (Martin/Hopkins and Friedewald) compared with gold standard preparative ultracentrifugation (PUC) in patients with low LDL-C levels in the Further Cardiovascular Outcomes Research With PCSK9 Inhibition in Patients With Elevated Risk (FOURIER) trial., Design, Setting, and Participants: The FOURIER trial was a randomized clinical trial of evolocumab vs placebo added to statin therapy in 27 564 patients with stable atherosclerotic cardiovascular disease. The patients' LDL-C levels were assessed at baseline, 4 weeks, 12 weeks, 24 weeks, and every 24 weeks thereafter, and measured directly by PUC when the level was less than 40 mg/dL per the Friedewald method (calculated as non-HDL-C level - triglycerides/5). In the Martin/Hopkins method, patient-specific ratios of triglycerides to very low-density lipoprotein cholesterol (VLDL-C) were determined and used to estimate VLDL-C, which was subtracted from the non-HDL-C level to obtain the LDL-C level., Main Outcomes and Measures: Low-density lipoprotein cholesterol calculated by the Friedewald and Martin/Hopkins methods, with PUC as the reference method., Results: For this analysis, the mean (SD) age was 62.7 (9.0) years; 2885 of the 12 742 patients were women (22.6%). A total of 56 624 observations from 12 742 patients had Friedewald, Martin/Hopkins, and PUC LDL-C measurements. The median difference from PUC LDL-C levels for Martin/Hopkins LDL-C levels was -2 mg/dL (interquartile range [IQR], -4 to 1 mg/dL) and for Friedewald LDL-C levels was -4 mg/dL (IQR, -8 to -1 mg/dL; P < .001). Overall, 22.9% of Martin/Hopkins LDL-C values were more than 5 mg/dL different than PUC values, and 2.6% were more than 10 mg/dL different than PUC levels. These were significantly less than respective proportions with Friedewald estimation (40.1% and 13.3%; P < .001), mainly because of underestimation by the Friedewald method. The correlation with PUC LDL-C was significantly higher for Martin/Hopkins vs Friedewald (ρ, 0.918 [95% CI 0.916-0.919] vs ρ, 0.867 [0.865-0.869], P < .001)., Conclusions and Relevance: In patients achieving low LDL-C with PCSK9 inhibition, the Martin/Hopkins method for LDL-C estimation more closely approximates gold standard PUC than Friedewald estimation does. The Martin/Hopkins method may prevent undertreatment because of LDL-C underestimation by the Friedewald method., Trial Registration: ClinicalTrials.gov Identifier: NCT01764633.
- Published
- 2018
- Full Text
- View/download PDF
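The two estimators compared in entry 219 differ only in the triglyceride divisor: Friedewald fixes it at 5, while Martin/Hopkins looks up a patient-specific factor from a published table stratified by non-HDL-C and triglyceride levels. The sketch below shows the structure of the calculation with a placeholder factor; the actual 180-cell table is not reproduced.

```python
# LDL-C estimation, mg/dL; the Martin/Hopkins factor below is a placeholder.
def ldl_friedewald(total_chol, hdl, trig):
    return total_chol - hdl - trig / 5                # fixed TG:VLDL-C factor of 5

def ldl_martin_hopkins(total_chol, hdl, trig, factor):
    # 'factor' would be read from the Martin/Hopkins table keyed on
    # non-HDL-C and triglyceride strata (table not reproduced here)
    return total_chol - hdl - trig / factor

tc, hdl, tg = 140, 45, 150
print(f"Friedewald:     {ldl_friedewald(tc, hdl, tg):.0f} mg/dL")
print(f"Martin/Hopkins: {ldl_martin_hopkins(tc, hdl, tg, factor=6.1):.0f} mg/dL")
```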
220. A flexible and coherent test/estimation procedure based on restricted mean survival times for censored time-to-event data in randomized clinical trials.
- Author
-
Horiguchi M, Cronin AM, Takeuchi M, and Uno H
- Subjects
- Humans, Neoplasms therapy, Proportional Hazards Models, Statistics, Nonparametric, Time-Lapse Imaging, Randomized Controlled Trials as Topic methods, Statistics as Topic methods, Survival Analysis
- Abstract
In randomized clinical trials where time-to-event is the primary outcome, almost routinely, the logrank test is prespecified as the primary test and the hazard ratio is used to quantify treatment effect. If the ratio of 2 hazard functions is not constant, the logrank test is not optimal and the interpretation of hazard ratio is not obvious. When such a nonproportional hazards case is expected at the design stage, the conventional practice is to prespecify another member of weighted logrank tests, eg, Peto-Prentice-Wilcoxon test. Alternatively, one may specify a robust test as the primary test, which can capture various patterns of difference between 2 event time distributions. However, most of those tests do not have companion procedures to quantify the treatment difference, and investigators have fallen back on reporting treatment effect estimates not associated with the primary test. Such incoherence in the "test/estimation" procedure may potentially mislead clinicians/patients who have to balance risk-benefit for treatment decision. To address this, we propose a flexible and coherent test/estimation procedure based on restricted mean survival time, where the truncation time τ is selected data dependently. The proposed procedure is composed of a prespecified test and an estimation of corresponding robust and interpretable quantitative treatment effect. The utility of the new procedure is demonstrated by numerical studies based on 2 randomized cancer clinical trials; the test is dramatically more powerful than the logrank, Wilcoxon tests, and the restricted mean survival time-based test with a fixed τ, for the patterns of difference seen in these cancer clinical trials., (Copyright © 2018 John Wiley & Sons, Ltd.)
- Published
- 2018
- Full Text
- View/download PDF
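Entry 220 builds its test/estimation procedure on the restricted mean survival time (RMST), the area under the Kaplan-Meier curve up to a truncation time τ. A self-contained sketch of that basic quantity on tiny made-up data follows; the paper's data-dependent choice of τ and the accompanying test are not implemented.

```python
# RMST up to tau as the area under the Kaplan-Meier curve (hypothetical data).
import numpy as np

def km_rmst(times, events, tau):
    """Area under the Kaplan-Meier curve up to tau for right-censored data."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    at_risk, surv, rmst, last_t = len(times), 1.0, 0.0, 0.0
    for t, d in zip(times, events):
        rmst += surv * (min(t, tau) - last_t)
        last_t = min(t, tau)
        if t > tau:
            break
        if d:                        # event: survival curve drops
            surv *= 1 - 1 / at_risk
        at_risk -= 1
    return rmst + surv * max(0.0, tau - last_t)

t = [2, 3, 3, 5, 8, 10, 12, 15]      # months (hypothetical)
e = [1, 1, 0, 1, 1, 0, 1, 0]         # 1 = event, 0 = censored
print(f"RMST up to 12 months: {km_rmst(t, e, tau=12):.2f} months")
```

The between-arm treatment effect is then simply the difference (or ratio) of the two arms' RMSTs at the same τ, which is what gives the procedure its direct interpretation.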
221. Semantics for an Integrative and Immersive Pipeline Combining Visualization and Analysis of Molecular Data.
- Author
-
Trellet M, Férey N, Flotyński J, Baaden M, and Bourdot P
- Subjects
- Humans, Imaging, Three-Dimensional methods, Models, Structural, Statistics as Topic methods, User-Computer Interface, Computer Graphics, Semantics, Software
- Abstract
The advances made in recent years in the field of structural biology significantly increased the throughput and complexity of data that scientists have to deal with. Combining and analyzing such heterogeneous amounts of data became a crucial time consumer in the daily tasks of scientists. However, only few efforts have been made to offer scientists an alternative to the standard compartmentalized tools they use to explore their data and that involve a regular back and forth between them. We propose here an integrated pipeline especially designed for immersive environments, promoting direct interactions on semantically linked 2D and 3D heterogeneous data, displayed in a common working space. The creation of a semantic definition describing the content and the context of a molecular scene leads to the creation of an intelligent system where data are (1) combined through pre-existing or inferred links present in our hierarchical definition of the concepts, (2) enriched with suitable and adaptive analyses proposed to the user with respect to the current task and (3) interactively presented in a unique working environment to be explored.
- Published
- 2018
- Full Text
- View/download PDF
222. Odds Ratios-Current Best Practice and Use.
- Author
-
Norton EC, Dowd BE, and Maciejewski ML
- Subjects
- Data Interpretation, Statistical, Humans, Risk Factors, Logistic Models, Odds Ratio, Statistics as Topic methods
- Published
- 2018
- Full Text
- View/download PDF
223. The convergence analysis of SpikeProp algorithm with smoothing L1/2 regularization.
- Author
-
Zhao J, Zurada JM, Yang J, and Wu W
- Subjects
- Breast Neoplasms epidemiology, Female, Humans, Statistics as Topic trends, Algorithms, Neural Networks, Computer, Statistics as Topic methods
- Abstract
Unlike the first and the second generation artificial neural networks, spiking neural networks (SNNs) model the human brain by incorporating not only synaptic state but also a temporal component into their operating model. However, their intrinsic properties require expensive computation during training. This paper presents a novel algorithm to SpikeProp for SNN by introducing a smoothing L1/2 regularization term into the error function. This algorithm makes the network structure sparse, with some smaller weights that can be eventually removed. Meanwhile, the convergence of this algorithm is proved under some reasonable conditions. The proposed algorithms have been tested for the convergence speed, the convergence rate and the generalization on the classical XOR-problem, Iris problem and Wisconsin Breast Cancer classification., (Copyright © 2018 Elsevier Ltd. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
224. The use of classification tree analysis to assess the influence of surgical timing on neurological recovery following severe cervical traumatic spinal cord injury.
- Author
-
Facchinello Y, Richard-Denis A, Beauséjour M, Thompson C, and Mac-Thiong JM
- Subjects
- Cervical Vertebrae surgery, Female, Humans, Male, Operative Time, Prospective Studies, Retrospective Studies, Time Factors, Trauma Severity Indices, Treatment Outcome, Decompression, Surgical methods, Outcome Assessment, Health Care, Recovery of Function physiology, Spinal Cord Injuries surgery, Statistics as Topic methods
- Abstract
Study Design: Post hoc analysis of prospectively collected data., Objectives: Assess the influence of surgical timing on neurological recovery using classification tree analysis in patients sustaining cervical traumatic spinal cord injury., Setting: Hôpital du Sacré-Coeur de Montréal., Methods: 42 patients sustaining cervical SCI were followed for at least 6 months post injury. Neurological status was assessed from the American Spinal Injury Association impairment scale (AIS) and neurological level of injury (NLI) at admission and at follow-up. Age, surgical timing, AIS grade at admission and energy of injury were the four input parameters. Neurological recovery was quantified by the occurrence of improvement by at least one AIS grade, at least 2 AIS grades and at least 2 NLI., Results: The proportion of patients that improved at least one AIS grade was higher in the group that received early surgery (75 vs. 41%). The proportion of patients that improved two AIS grades was also higher in the group that received early surgery (67 vs. 38%). Finally, 30% of the patients that received early decompression improved two NLI as compared with 0% in the other group. Early surgery was also associated with a non-statistically significant improvement in functional recovery., Conclusions: Neurological recovery of patients sustaining cervical traumatic spinal cord injury can be improved by early decompression surgery performed within 19 h post trauma., Sponsorship: U.S. Army Medical Research and Material Command, Rick Hansen Institute.
- Published
- 2018
- Full Text
- View/download PDF
225. Unstructured Formulation Data Analysis for the Optimization of Lipid Nanoparticle Drug Delivery Vehicles.
- Author
-
Silva J, Mendes M, Cova T, Sousa J, Pais A, and Vitorino C
- Subjects
- Drug Carriers, Drug Compounding, Drug Liberation, Excipients, Lipids, Nanoparticles metabolism, Particle Size, Surface-Active Agents, Drug Delivery Systems methods, Nanoparticles chemistry, Principal Component Analysis methods, Statistics as Topic methods
- Abstract
Designing nanoparticle formulations with features tailored to their therapeutic targets in demanding timelines assumes increased importance. In this context, nanostructured lipid carriers (NLCs) offer an excellent example of a drug delivery nanosystem that has been broadly explored in the treatment of glioblastoma multiforme (GBM). Distinct fundamental NLC quality attributes can be harnessed to fit this purpose, namely particle size, size distribution, and zeta potential. These critical aspects intrinsically depend on the formulation components, influencing drug loading capacity, drug release, and stability of the NLCs. Wide variations in their composition, including the type of lipids and other surface modifier excipients, lead to differences on these parameters. NLC target product profile involves small mean particle sizes, narrow size distributions, and absolute values of zeta potential higher than 30 mV. In this work, a wealth of data previously obtained in experiments on NLC preparation, encompassing, e.g., results of preliminary studies and those of intermediate formulations, is analyzed in order to extract information useful in further optimization studies. Principal component analysis (PCA) and partial least squares (PLS) are performed to evaluate the influence of NLC composition on the respective characteristics. These methods provide a rapid and discriminatory analysis for establishing a preformulation framework, by selecting the most suitable types of lipids, surfactants, surface modifiers, and drugs, within the set of investigated variables. The results have direct implications in the optimization of formulation and processes.
- Published
- 2018
- Full Text
- View/download PDF
226. Analytical Sigma metrics: A review of Six Sigma implementation tools for medical laboratories.
- Author
-
Westgard S, Bayat H, and Westgard JO
- Subjects
- Clinical Laboratory Techniques, Statistics as Topic methods
- Abstract
Sigma metrics have become a useful tool for all parts of the quality control (QC) design process. Through the allowable total error model of laboratory testing, analytical assay performance can be judged on the Six Sigma scale. This not only allows benchmarking the performance of methods and instruments on a universal scale, it allows laboratories to easily visualize performance, optimize the QC rules and numbers of control measurements they implement, and now even schedule the frequency of running those controls., Competing Interests: Potential conflict of interest: None declared.
- Published
- 2018
- Full Text
- View/download PDF
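The Sigma metric reviewed in entry 226 is a one-line formula combining the allowable total error, observed bias, and imprecision of an assay. Example values below are hypothetical.

```python
# Sigma metric for an analytical method; example values are hypothetical.
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """(allowable total error - |bias|) / imprecision, all in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

print(f"Sigma = {sigma_metric(tea_pct=10.0, bias_pct=1.5, cv_pct=2.0):.2f}")
```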
227. Design of cancer trials based on progression-free survival with intermittent assessment.
- Author
-
Zeng L, Cook RJ, and Lee KA
- Subjects
- Disease Progression, Humans, Likelihood Functions, Markov Chains, Models, Statistical, Neoplasms diagnosis, Neoplasms mortality, Proportional Hazards Models, Sample Size, Time Factors, Treatment Outcome, Neoplasms therapy, Progression-Free Survival, Randomized Controlled Trials as Topic methods, Statistics as Topic methods
- Abstract
Therapeutic advances in cancer mean that it is now impractical to perform phase III randomized trials evaluating experimental treatments on the basis of overall survival. As a result, the composite endpoint of progression-free survival has been routinely adopted in recent years as it is viewed as enabling a more timely and cost-effective approach to assessing the clinical benefit of novel interventions. This article considers the design of cancer trials directed at the evaluation of treatment effects on progression-free survival. In particular, we derive sample size criteria based on an illness-death model that considers cancer progression and death jointly while accounting for the fact that progression is assessed only intermittently. An alternative approach to design is also considered in which the sample size is derived based on a misspecified Cox model, which uses the documented time of progression as the progression time rather than dealing with the interval censoring. Simulation studies show the validity of the proposed methods., (Copyright © 2018 John Wiley & Sons, Ltd.)
- Published
- 2018
- Full Text
- View/download PDF
228. [On the Meaning of Statistical Significance].
- Author
-
Teixeira PM
- Subjects
- Statistics as Topic methods
- Published
- 2018
- Full Text
- View/download PDF
229. Cavitation-threshold Determination and Rheological-parameters Estimation of Albumin-stabilized Nanobubbles.
- Author
-
Lafond M, Watanabe A, Yoshizawa S, Umemura SI, and Tachibana K
- Subjects
- Albumins pharmacology, Contrast Media chemistry, Contrast Media pharmacology, Drug Stability, Humans, Manufactured Materials, Microtechnology, Particle Size, Statistics as Topic methods, Surface Tension, Ultrasonics methods, Viscosity, Albumins chemistry, Microbubbles, Physical Phenomena, Rheology methods
- Abstract
Nanobubbles (NBs) are of high interest for ultrasound (US) imaging as contrast agents and therapy as cavitation nuclei. Because of their instability (Laplace pressure bubble catastrophe) and low sensitivity to US, reducing the size of commonly used microbubbles to submicron-size is not trivial. We introduce stabilized NBs in the 100-250-nm size range, manufactured by agitating human serum albumin and perfluoro-propane. These NBs were exposed to 3.34- and 5.39-MHz US, and their sensitivity to US was proven by detecting inertial cavitation. The cavitation-threshold information was used to run a numerical parametric study based on a modified Rayleigh-Plesset equation (with a Newtonian rheology model). The determined values of surface tension ranged from 0 N/m to 0.06 N/m. The corresponding values of dilatational viscosity ranged from 5.10⁻¹⁰ Ns/m to 1.10⁻⁹ Ns/m. These parameters were reported to be 0.6 N/m and 1.10⁻⁸ Ns/m for the reference microbubble contrast agent. This result suggests the possibility of using albumin as a stabilizer for the nanobubbles that could be maintained in circulation and presenting satisfying US sensitivity, even in the 3-5-MHz range.
- Published
- 2018
- Full Text
- View/download PDF
230. Pinocchio testing in the forensic analysis of waiting lists: using public waiting list data from Finland and Spain for testing Newcomb-Benford's Law.
- Author
-
Pinilla J, López-Valcárcel BG, González-Martel C, and Peiro S
- Subjects
- Finland, Humans, National Health Programs, Probability, Research Design, Spain, Universal Health Insurance, Statistics as Topic methods, Waiting Lists
- Abstract
Objective: Newcomb-Benford's Law (NBL) proposes a regular distribution for first digits, second digits and digit combinations applicable to many different naturally occurring sources of data. Testing deviations from NBL is used in many datasets as a screening tool for identifying data trustworthiness problems. This study aims to compare publicly available waiting lists (WL) data from Finland and Spain for testing NBL as an instrument to flag up potential manipulation in WLs., Design: Analysis of the frequency of Finnish and Spanish WLs first digits to determine if their distribution is similar to the pattern documented by NBL. Deviations from the expected first digit frequency were analysed using Pearson's χ², mean absolute deviation and Kuiper tests., Setting/participants: Publicly available WL data from Finland and Spain, two countries with universal health insurance and National Health Systems but characterised by different levels of transparency and good governance standards., Main Outcome Measures: Adjustment of the observed distribution of the numbers reported in Finnish and Spanish WL data to the expected distribution according to NBL., Results: WL data reported by the Finnish health system fits first digit NBL according to all statistical tests used (p=0.6519 in the χ² test). For Spanish data, this hypothesis was rejected in all tests (p<0.0001 in the χ² test)., Conclusions: Testing deviations from NBL distribution can be a useful tool to identify problems with WL data trustworthiness and signalling the need for further testing., Competing Interests: Competing interests: None declared., (© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.)
- Published
- 2018
- Full Text
- View/download PDF
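A first-digit Newcomb-Benford check of the kind entry 230 applies to waiting-list data: expected leading-digit frequencies are log10(1 + 1/d), and the fit is assessed with a chi-square test (the paper also uses mean absolute deviation and Kuiper tests). The counts below are simulated, not the Finnish or Spanish data.

```python
# Simulated waiting-list counts; first-digit Benford goodness-of-fit test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
waiting_counts = rng.lognormal(mean=4, sigma=1.2, size=400).astype(int) + 1

first_digits = np.array([int(str(n)[0]) for n in waiting_counts])
observed = np.array([(first_digits == d).sum() for d in range(1, 10)])
expected = np.log10(1 + 1 / np.arange(1, 10)) * len(first_digits)

chi2, p = stats.chisquare(observed, expected)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```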
231. Can administrative health utilisation data provide an accurate diabetes prevalence estimate for a geographical region?
- Author
-
Chan WC, Papaconstantinou D, Lee M, Telfer K, Jo E, Drury PL, and Tobias M
- Subjects
- Administrative Claims, Healthcare statistics & numerical data, Adolescent, Adult, Aged, Aged, 80 and over, Algorithms, Child, Child, Preschool, Female, Humans, Infant, Infant, Newborn, Male, Middle Aged, New Zealand epidemiology, Pregnancy, Prevalence, Registries, Sensitivity and Specificity, Statistics as Topic methods, Young Adult, Diabetes Mellitus epidemiology, Diabetes Mellitus therapy, Health Resources statistics & numerical data
- Abstract
Aim: To validate the New Zealand Ministry of Health (MoH) Virtual Diabetes Register (VDR) using longitudinal laboratory results and to develop an improved algorithm for estimating diabetes prevalence at a population level., Methods: The assigned diabetes status of individuals based on the 2014 version of the MoH VDR is compared to the diabetes status based on the laboratory results stored in the Auckland regional laboratory result repository (TestSafe) using the New Zealand diabetes diagnostic criteria. The existing VDR algorithm is refined by reviewing the sensitivity and positive predictive value of each of the VDR algorithm rules individually and as a combination., Results: The diabetes prevalence estimate based on the original 2014 MoH VDR was 17% higher (n = 108,505) than the corresponding TestSafe prevalence estimate (n = 92,707). Compared to the diabetes prevalence based on TestSafe, the original VDR has a sensitivity of 89%, specificity of 96%, positive predictive value of 76% and negative predictive value of 98%. The modified VDR algorithm has improved the positive predictive value by 6.1% and the specificity by 1.4% with modest reductions in sensitivity of 2.2% and negative predictive value of 0.3%. At an aggregated level the overall diabetes prevalence estimated by the modified VDR is 5.7% higher than the corresponding estimate based on TestSafe., Conclusion: The Ministry of Health Virtual Diabetes Register algorithm has been refined to provide a more accurate diabetes prevalence estimate at a population level. The comparison highlights the potential value of a national population long term condition register constructed from both laboratory results and administrative data., (Copyright © 2018 Elsevier B.V. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
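The validation above amounts to cross-tabulating register-assigned diabetes status against laboratory-defined status and summarising the resulting 2x2 table. A small sketch of those summary measures, with made-up counts rather than the VDR/TestSafe figures:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 comparison of
    register-assigned status (test) against laboratory-defined status (reference)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "PPV": ppv, "NPV": npv}

# Hypothetical counts: register status cross-tabulated against laboratory status
metrics = diagnostic_accuracy(tp=82_000, fp=26_000, fn=10_000, tn=880_000)
for name, value in metrics.items():
    print(f"{name}: {value:.1%}")
```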
232. Estimation of energetic condition in wild baboons using fecal thyroid hormone determination.
- Author
-
Gesquiere LR, Pugh M, Alberts SC, and Markham AC
- Subjects
- Animals, Animals, Wild, Body Constitution physiology, Energy Intake physiology, Female, Glucocorticoids metabolism, Lactation physiology, Male, Papio metabolism, Physical Fitness physiology, Pregnancy, Seasons, Statistics as Topic methods, Thyroid Hormones metabolism, Energy Metabolism, Feces chemistry, Papio physiology, Reproduction physiology, Thyroid Hormones analysis
- Abstract
Understanding how environmental and social factors affect reproduction through variation in energetic condition remains understudied in wild animals, in large part because accurately and repeatedly measuring energetic condition in the wild is a challenge. Thyroid hormones (THs), such as triiodothyronine (T3) and thyroxine (T4), have a key role in mitigating metabolic responses to energy intake and expenditure, and therefore are considered important biomarkers of an animal's energetic condition. Recent method development has shown that T3 and T4 metabolites can be measured in feces, but studies measuring THs in wild populations remain rare. Here we measured fecal T3 metabolites (mT3) in baboons, and tested whether the conditions of collection and storage used for steroid hormones could also be used for mT3; we focused on mT3 as it is the biologically active form of TH and because fecal T4 metabolites (mT4) were below detection levels in our samples. We also tested if mT3 could be determined in freeze-dried samples stored for long periods of time, and if these concentrations reflected expected biological variations across seasons and reproductive states. Our results show that mT3 can be measured with accuracy and precision in baboon feces. The conditions of collection and storage we use for steroid hormones are appropriate for mT3 determination. In addition, mT3 concentrations can be determined in samples stored at -20 °C for up to 9 years, and are not predicted by the amount of time in storage. As expected, wild female baboons have lower mT3 concentrations during the dry season. Interestingly, mT3 concentrations are lower in pregnant and lactating females, possibly reflecting an energy sparing mechanism. Retroactive determination of mT3 concentration in stored, freeze-dried feces opens the door to novel studies on the role of energetic condition on fitness in wild animals., (Copyright © 2018 Elsevier Inc. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
233. A cautionary note on Bayesian estimation of population size by removal sampling with diffuse priors.
- Author
-
Bord S, Bioche C, and Druilhet P
- Subjects
- Bayes Theorem, Models, Statistical, Population Density, Statistics as Topic methods
- Abstract
We consider the problem of estimating a population size by removal sampling when the sampling rate is unknown. Bayesian methods are now widespread and make it possible to include prior knowledge in the analysis. However, we show that Bayes estimates based on default improper priors lead to improper posteriors or infinite estimates. Similarly, weakly informative priors give unstable estimators that are sensitive to the choice of hyperparameters. By examining the likelihood, we show that population size estimates can be stabilized by penalizing small values of the sampling rate or large values of the population size. Based on theoretical results and simulation studies, we propose some recommendations on the choice of the prior. Finally, we apply our results to real datasets., (© 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.)
- Published
- 2018
- Full Text
- View/download PDF
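As a rough illustration of the removal-sampling model discussed above, the sketch below writes the likelihood of successive catches from a closed population and obtains a grid-based MAP estimate of the population size under a Beta prior on the sampling rate. The catch counts, grid limits and Beta(2, 2) prior are illustrative assumptions, not the priors analysed by the authors.

```python
import numpy as np
from scipy.stats import binom, beta

def removal_log_likelihood(N, ps, catches):
    """Log-likelihood of a removal-sampling series for population size N over a
    vector of candidate sampling rates ps: each pass removes Binomial(remaining, p)."""
    remaining = N
    loglik = np.zeros_like(ps)
    for c in catches:
        if c > remaining:
            return np.full_like(ps, -np.inf)
        loglik += binom.logpmf(c, remaining, ps)
        remaining -= c
    return loglik

def map_estimate(catches, a=2.0, b=2.0, n_max=1500):
    """Grid-based MAP estimate of N with a Beta(a, b) prior on the sampling rate p.
    Choosing a, b > 1 penalises extreme sampling rates; with a diffuse prior on p
    the posterior can become improper and the estimate of N unbounded."""
    total = sum(catches)
    ps = np.linspace(0.01, 0.99, 99)
    log_prior_p = beta.logpdf(ps, a, b)
    best = (-np.inf, None, None)
    for N in range(total, n_max + 1):                 # flat prior on N over the grid
        logpost = removal_log_likelihood(N, ps, catches) + log_prior_p
        i = int(np.argmax(logpost))
        if logpost[i] > best[0]:
            best = (logpost[i], N, ps[i])
    return best[1], best[2]

catches = [120, 70, 45]                               # hypothetical removal counts, three passes
N_hat, p_hat = map_estimate(catches)
print(f"MAP population size: {N_hat}, sampling rate: {p_hat:.2f}")
```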
234. Suspect Screening Using LC-QqTOF Is a Useful Tool for Detecting Drugs in Biological Samples.
- Author
-
Colby JM, Thoren KL, and Lynch KL
- Subjects
- False Negative Reactions, False Positive Reactions, Humans, Statistics as Topic methods, Chromatography, Liquid methods, Substance Abuse Detection methods, Tandem Mass Spectrometry methods
- Abstract
High-resolution mass spectrometers (HRMS), including quadrupole time of flight mass analyzers (QqTOF), are becoming more prevalent as screening tools in clinical and forensic toxicology laboratories. Among other advantages, HRMS instruments can collect untargeted, full-scan mass spectra. These datasets can be analyzed retrospectively using a combination of techniques, which can extend the drug detection capabilities. Most laboratories using HRMS in production settings perform untargeted data collection, but analyze data in a targeted manner. To perform targeted analysis, a laboratory must first analyze a reference standard to determine the expected characteristics of a given compound. In an alternate technique known as suspect screening, compounds can be tentatively identified without the use of reference standards. Instead, predicted and/or intrinsic characteristics of a compound, such as the accurate mass, isotope pattern, and product ion spectrum are used to determine its presence in a sample. The fact that reference standards are not required a priori makes this data analysis approach very attractive, especially for the ever-changing landscape of novel psychoactive substances. In this work, we compared the performance of four data analysis workflows (targeted and three suspect screens) for a panel of 170 drugs and metabolites, detected by LC-QqTOF. We found that retention time was not required for drug identification; the suspect screen using accurate mass, isotope pattern, and product ion library matching was able to identify more than 80% of the drugs that were present in human urine samples. We showed that the inclusion of product ion spectral matching produced the largest decrease in false discovery and false negative rates, as compared to suspect screening using mass alone or using just mass and isotope pattern. Our results demonstrate the promise that suspect screening holds for building large, economical drug screens, which may be a key tool to monitor the use of emerging drugs of abuse, including novel psychoactive substances.
- Published
- 2018
- Full Text
- View/download PDF
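In a suspect screen, detected features are tentatively matched to a suspect list using intrinsic properties such as accurate mass within a ppm tolerance (together with isotope pattern and product-ion spectra), rather than retention times from reference standards. Below is a toy sketch of the accurate-mass step only; the compound masses, tolerance and sample m/z values are approximate and purely illustrative.

```python
# Hypothetical suspect list: compound name -> approximate [M+H]+ monoisotopic mass (Da)
SUSPECTS = {
    "cocaine": 304.1549,
    "fentanyl": 337.2280,
    "MDMA": 194.1176,
}

def ppm_error(observed_mz, theoretical_mz):
    """Mass error of an observed m/z relative to a theoretical m/z, in ppm."""
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

def suspect_match(observed_mzs, tolerance_ppm=10.0):
    """Tentatively assign each observed m/z to suspects within the ppm tolerance."""
    hits = []
    for mz in observed_mzs:
        for name, theo in SUSPECTS.items():
            err = ppm_error(mz, theo)
            if abs(err) <= tolerance_ppm:
                hits.append((mz, name, round(err, 1)))
    return hits

# Hypothetical feature list from an untargeted acquisition
print(suspect_match([304.1553, 194.1200, 500.2000]))
```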
235. A comparison of four-sample slope-intercept and single-sample 51Cr-EDTA glomerular filtration rate measurements.
- Author
-
Porter CA, Bradley KM, and McGowan DR
- Subjects
- Humans, Chromium Radioisotopes, Edetic Acid metabolism, Glomerular Filtration Rate, Statistics as Topic methods
- Abstract
The aim of this study was to verify, with a large dataset of 1394 51Cr-EDTA glomerular filtration rate (GFR) studies, the equivalence of slope-intercept and single-sample GFR. Raw data from 1394 patient studies were used to calculate four-sample slope-intercept GFR in addition to four individual single-sample GFR values (blood samples taken at 90, 150, 210 and 270 min after injection). The percentage differences between the four-sample slope-intercept and each of the single-sample GFR values were calculated, to identify the optimum single-sample time point. Having identified the optimum time point, the percentage difference between the slope-intercept and optimal single-sample GFR was calculated across a range of GFR values to investigate whether there was a GFR value below which the two methodologies cannot be considered equivalent. It was found that the lowest percentage difference between slope-intercept and single-sample GFR was for the third blood sample, taken at 210 min after injection. The median percentage difference was 2.5% and only 6.9% of patient studies had a percentage difference greater than 10%. Above a GFR value of 30 ml/min/1.73 m2, the median percentage difference between the slope-intercept and optimal single-sample GFR values was below 10%, and so it was concluded that, above this value, the two techniques are sufficiently equivalent. This study supports the recommendation of performing single-sample GFR measurements for GFRs greater than 30 ml/min/1.73 m2.
- Published
- 2018
- Full Text
- View/download PDF
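The comparison above comes down to computing, for each patient study, the percentage difference between the slope-intercept GFR and a single-sample GFR, then summarising the median difference and the proportion exceeding 10%. A small sketch with hypothetical values:

```python
import numpy as np

def percent_difference(reference, comparison):
    """Absolute percentage difference of single-sample GFR relative to slope-intercept GFR."""
    reference = np.asarray(reference, dtype=float)
    comparison = np.asarray(comparison, dtype=float)
    return np.abs(comparison - reference) / reference * 100.0

# Hypothetical paired GFR values in ml/min/1.73 m2 (not the study data)
slope_intercept = np.array([95.0, 42.0, 60.0, 28.0, 110.0])
single_sample_210min = np.array([93.0, 45.0, 58.0, 33.0, 112.0])

diff = percent_difference(slope_intercept, single_sample_210min)
print(f"median difference: {np.median(diff):.1f}%")
print(f"share > 10%: {np.mean(diff > 10) * 100:.1f}%")
```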
236. A Note on G-Estimation of Causal Risk Ratios.
- Author
-
Dukes O and Vansteelandt S
- Subjects
- Confounding Factors, Epidemiologic, Humans, Software, Biometry methods, Odds Ratio, Statistics as Topic methods
- Abstract
G-estimation is a flexible, semiparametric approach for estimating exposure effects in epidemiologic studies. It has several underappreciated advantages over other propensity score-based methods popular in epidemiology, which we review in this article. However, it is rarely used in practice, due to a lack of off-the-shelf software. To rectify this, we show a simple trick for obtaining G-estimators of causal risk ratios using existing generalized estimating equations software. We extend the procedure to more complex settings with time-varying confounders.
- Published
- 2018
- Full Text
- View/download PDF
237. Quantitative comparisons of three automated methods for estimating intracranial volume: A study of 270 longitudinal magnetic resonance images.
- Author
-
Shang X, Carlson MC, and Tang X
- Subjects
- Alzheimer Disease etiology, Atrophy diagnostic imaging, Atrophy pathology, Brain diagnostic imaging, Cohort Studies, Humans, Longitudinal Studies, Magnetic Resonance Imaging methods, Reproducibility of Results, Alzheimer Disease diagnostic imaging, Brain pathology, Magnetic Resonance Imaging statistics & numerical data, Statistics as Topic methods
- Abstract
Total intracranial volume (TIV) is often used as a measure of brain size to correct for individual variability in magnetic resonance imaging (MRI) based morphometric studies. An adjustment of TIV can greatly increase the statistical power of brain morphometry methods. As such, an accurate and precise TIV estimation is of great importance in MRI studies. In this paper, we compared three automated TIV estimation methods (multi-atlas likelihood fusion (MALF), Statistical Parametric Mapping 8 (SPM8) and FreeSurfer (FS)) using longitudinal T1-weighted MR images in a cohort of 70 older participants at elevated sociodemographic risk for Alzheimer's disease. Statistical group comparisons in terms of four different metrics were performed. Furthermore, sex, education level, and intervention status were investigated separately for their impacts on the TIV estimation performance of each method. According to our experimental results, MALF was the least susceptible to atrophy, while SPM8 and FS suffered a loss in precision. In group-wise analysis, MALF was the method least sensitive to group variation, whereas SPM8 was particularly sensitive to sex and FS was unstable with respect to education level. In terms of practical use, both MALF and SPM8 were user-friendly, while FS was relatively computationally intensive., (Copyright © 2018 Elsevier B.V. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
238. Evaluation of biomarkers for treatment selection using individual participant data from multiple clinical trials.
- Author
-
Kang C, Janes H, Tajik P, Groen H, Mol B, Koopmans C, Broekhuijsen K, Zwertbroek E, van Pampus M, and Franssen M
- Subjects
- Female, Humans, Hypertension, Pregnancy-Induced epidemiology, Meta-Analysis as Topic, Models, Statistical, Pre-Eclampsia epidemiology, Pregnancy, Pregnancy Complications epidemiology, Randomized Controlled Trials as Topic statistics & numerical data, Risk Factors, Treatment Outcome, Biomarkers, Patient Selection, Randomized Controlled Trials as Topic methods, Statistics as Topic methods
- Abstract
Biomarkers that predict treatment effects may be used to guide treatment decisions, thus improving patient outcomes. A meta-analysis of individual participant data (IPD) is potentially more powerful than a single-study data analysis in evaluating markers for treatment selection. Our study was motivated by the IPD that were collected from 2 randomized controlled trials of hypertension and preeclampsia among pregnant women to evaluate the effect of labor induction over expectant management of the pregnancy in preventing progression to severe maternal disease. The existing literature on statistical methods for biomarker evaluation in IPD meta-analysis has evaluated a marker's performance in terms of its ability to predict risk of disease outcome, which does not directly apply to the treatment selection problem. In this study, we propose a statistical framework for evaluating a marker for treatment selection given IPD from a small number of individual clinical trials. We derive marker-based treatment rules by minimizing the average expected outcome across studies. The application of the proposed methods to the IPD from 2 studies in women with hypertension in pregnancy is presented., (Copyright © 2018 John Wiley & Sons, Ltd.)
- Published
- 2018
- Full Text
- View/download PDF
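The core idea of the framework above is to derive a marker-based treatment rule, for example "treat when the marker exceeds a cutoff", by choosing the cutoff that minimises the average expected outcome across the pooled trials. The simplified sketch below uses a binary adverse outcome, a single marker and equal study weights; all data and names are hypothetical and the estimator is far cruder than the authors' modelling framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate(y):
    """Event rate; NaN if the subgroup is empty."""
    return y.mean() if y.size else np.nan

def expected_outcome_under_rule(marker, treated, outcome, cutoff):
    """Estimated adverse-outcome rate in one trial if patients with
    marker > cutoff received treatment and everyone else received control."""
    high = marker > cutoff
    p_high = high.mean()
    rate_treated_high = rate(outcome[high & (treated == 1)])
    rate_control_low = rate(outcome[~high & (treated == 0)])
    return p_high * rate_treated_high + (1 - p_high) * rate_control_low

def choose_cutoff(studies, cutoffs):
    """Pick the cutoff minimising the average expected outcome across studies."""
    avg = [np.nanmean([expected_outcome_under_rule(*s, c) for s in studies])
           for c in cutoffs]
    return cutoffs[int(np.nanargmin(avg))], avg

def simulate(n):
    """One hypothetical trial: treatment helps only when the marker is high."""
    marker = rng.uniform(0, 1, n)
    treated = rng.integers(0, 2, n)
    p = 0.3 - 0.2 * treated * (marker > 0.5)
    outcome = rng.binomial(1, p)
    return marker, treated, outcome

studies = [simulate(500), simulate(400)]
cutoffs = np.linspace(0.1, 0.9, 17)
best, _ = choose_cutoff(studies, cutoffs)
print(f"selected cutoff: {best:.2f}")
```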
239. A recursive partitioning approach for subgroup identification in individual patient data meta-analysis.
- Author
-
Mistry D, Stallard N, and Underwood M
- Subjects
- Data Interpretation, Statistical, Humans, Low Back Pain therapy, Models, Statistical, Randomized Controlled Trials as Topic methods, Treatment Outcome, Meta-Analysis as Topic, Statistics as Topic methods
- Abstract
Background: Motivated by the setting of clinical trials in low back pain, this work investigated statistical methods to identify patient subgroups for which there is a large treatment effect (treatment by subgroup interaction). Statistical tests for interaction are often underpowered. Individual patient data (IPD) meta-analyses provide a framework with improved statistical power to investigate subgroups. However, conventional approaches to subgroup analyses applied in both a single trial setting and an IPD setting have a number of issues, one of them being that factors used to define subgroups are investigated one at a time. As individuals have multiple characteristics that may be related to response to treatment, alternative exploratory statistical methods are required., Methods: Tree-based methods are a promising alternative that systematically searches the covariate space to identify subgroups defined by multiple characteristics. A tree method in particular, SIDES, is described and extended for application in an IPD meta-analyses setting by incorporating fixed-effects and random-effects models to account for between-trial variation. The performance of the proposed extension was assessed using simulation studies. The proposed method was then applied to an IPD low back pain dataset., Results: The simulation studies found that the extended IPD-SIDES method performed well in detecting subgroups especially in the presence of large between-trial variation. The IPD-SIDES method identified subgroups with enhanced treatment effect when applied to the low back pain data., Conclusions: This work proposes an exploratory statistical approach for subgroup analyses applicable in any research discipline where subgroup analyses in an IPD meta-analysis setting are of interest., (© 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.)
- Published
- 2018
- Full Text
- View/download PDF
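The sketch below is not the SIDES algorithm itself (which uses specific differential-effect splitting criteria, resampling-based multiplicity control and, in the proposed extension, fixed- and random-effects pooling across trials); it only illustrates the underlying idea of systematically searching the covariate space for a subgroup with an enhanced treatment effect. All data are simulated and the split criterion is an illustrative choice.

```python
import numpy as np

def subgroup_effect(y, treated):
    """Treatment effect (difference in mean outcome) and its z-statistic within a subgroup."""
    y1, y0 = y[treated == 1], y[treated == 0]
    if y1.size < 20 or y0.size < 20:
        return np.nan, -np.inf
    diff = y1.mean() - y0.mean()
    se = np.sqrt(y1.var(ddof=1) / y1.size + y0.var(ddof=1) / y0.size)
    return diff, diff / se

def best_single_split(X, y, treated, n_cut=9):
    """Greedy search over covariates and cutoffs for the child subgroup with the
    largest treatment-effect z-statistic."""
    best = {"z": -np.inf}
    for j in range(X.shape[1]):
        for cut in np.quantile(X[:, j], np.linspace(0.1, 0.9, n_cut)):
            for side, mask in (("<=", X[:, j] <= cut), (">", X[:, j] > cut)):
                eff, z = subgroup_effect(y[mask], treated[mask])
                if z > best["z"]:
                    best = {"rule": f"x{j} {side} {cut:.2f}", "effect": eff, "z": z}
    return best

# Hypothetical IPD: treatment benefits only patients with x0 above zero
rng = np.random.default_rng(4)
n = 1500
X = rng.normal(size=(n, 3))
treated = rng.integers(0, 2, n)
y = 0.5 * X[:, 1] + treated * (X[:, 0] > 0) * 0.8 + rng.normal(size=n)
print(best_single_split(X, y, treated))
```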
240. Quantifying the Precision of Single-Molecule Torque and Twist Measurements Using Allan Variance.
- Author
-
van Oene MM, Ha S, Jager T, Lee M, Pedaci F, Lipfert J, and Dekker NH
- Subjects
- Magnetic Phenomena, Statistics as Topic methods, Torque
- Abstract
Single-molecule manipulation techniques have provided unprecedented insights into the structure, function, interactions, and mechanical properties of biological macromolecules. Recently, the single-molecule toolbox has been expanded by techniques that enable measurements of rotation and torque, such as the optical torque wrench (OTW) and several different implementations of magnetic (torque) tweezers. Although systematic analyses of the position and force precision of single-molecule techniques have attracted considerable attention, their angle and torque precision have been treated in much less detail. Here, we propose Allan deviation as a tool to systematically quantitate angle and torque precision in single-molecule measurements. We apply the Allan variance method to experimental data from our implementations of (electro)magnetic torque tweezers and an OTW and find that both approaches can achieve a torque precision better than 1 pN · nm. The OTW, capable of measuring torque on (sub)millisecond timescales, provides the best torque precision for measurement times ≲10 s, after which drift becomes a limiting factor. For longer measurement times, magnetic torque tweezers with their superior stability provide the best torque precision. Use of the Allan deviation enables critical assessments of the torque precision as a function of measurement time across different measurement modalities and provides a tool to optimize measurement protocols for a given instrument and application., (Copyright © 2018 Biophysical Society. Published by Elsevier Inc. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
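For a uniformly sampled torque or angle trace, the (non-overlapping) Allan variance at averaging time tau is half the mean squared difference between successive block averages of duration tau, and the Allan deviation is its square root. A minimal sketch with a simulated trace (the noise model and units are illustrative, not data from the instruments described above):

```python
import numpy as np

def allan_deviation(signal, sample_rate, taus):
    """Non-overlapping Allan deviation of a uniformly sampled signal.
    For each averaging time tau, the signal is split into consecutive blocks of
    duration tau; the Allan variance is half the mean squared difference of
    successive block averages."""
    signal = np.asarray(signal, dtype=float)
    adev = []
    for tau in taus:
        m = int(round(tau * sample_rate))             # samples per block
        n_blocks = signal.size // m
        if n_blocks < 2:
            adev.append(np.nan)
            continue
        block_means = signal[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(block_means) ** 2)
        adev.append(np.sqrt(avar))
    return np.array(adev)

# Hypothetical torque trace: white noise plus slow drift, 1 kHz sampling
rng = np.random.default_rng(1)
fs = 1000.0
t = np.arange(0, 60, 1 / fs)
torque = rng.normal(0, 5, t.size) + 0.05 * t          # pN*nm, illustrative only

taus = np.logspace(-2, 1, 10)                         # 10 ms to 10 s
print(np.round(allan_deviation(torque, fs, taus), 3))
```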
241. Inbreeding estimates in human populations: Applying new approaches to an admixed Brazilian isolate.
- Author
-
Lemes RB, Nunes K, Carnavalli JEP, Kimura L, Mingroni-Netto RC, Meyer D, and Otto PA
- Subjects
- Black People statistics & numerical data, Brazil epidemiology, Female, Genetic Markers, Genomics methods, Homozygote, Humans, Linkage Disequilibrium, Male, Pedigree, Polymorphism, Single Nucleotide, Statistics as Topic methods, Consanguinity, Genetics, Population methods, Genetics, Population statistics & numerical data
- Abstract
The analysis of genomic data (~400,000 autosomal SNPs) enabled the reliable estimation of inbreeding levels in a sample of 541 individuals sampled from a highly admixed Brazilian population isolate (an African-derived quilombo in the State of São Paulo). To achieve this, different methods were applied to the joint information of two sets of markers (one complete and another excluding loci in patent linkage disequilibrium). This strategy allowed the detection and exclusion of markers that biased the estimation of the average population inbreeding coefficient (Wright's fixation index FIS), whose value was eventually estimated at around 1% using any of the methods we applied. Quilombo demographic inferences were made by analyzing the structure of runs of homozygosity (ROH), which were adapted to cope with a highly admixed population with a complex foundation history. Our results suggest that the amount of ROH <2 Mb in admixed populations should be roughly proportional to the genetic contribution from each parental population.
- Published
- 2018
- Full Text
- View/download PDF
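The population inbreeding coefficient reported above, Wright's FIS, can be estimated from SNP data as 1 - Hobs/Hexp, the deficit of observed heterozygotes relative to Hardy-Weinberg expectation. The sketch below applies a ratio-of-sums estimator to simulated 0/1/2 genotypes; it illustrates the quantity itself, not the specific estimators or marker filtering applied in the study.

```python
import numpy as np

def wright_fis(genotypes):
    """Average inbreeding coefficient F_IS from a genotype matrix coded 0/1/2
    (minor-allele counts), individuals x SNPs.  F_IS = 1 - H_obs / H_exp,
    with expected heterozygosity 2p(1-p) per locus under Hardy-Weinberg."""
    g = np.asarray(genotypes, dtype=float)
    p = g.mean(axis=0) / 2.0                          # allele frequency per SNP
    h_exp = 2.0 * p * (1.0 - p)
    h_obs = (g == 1).mean(axis=0)                     # fraction of heterozygotes per SNP
    keep = h_exp > 0                                  # drop monomorphic loci
    return 1.0 - h_obs[keep].sum() / h_exp[keep].sum()

# Hypothetical genotypes: 200 individuals x 5000 SNPs simulated with mild inbreeding
rng = np.random.default_rng(2)
freqs = rng.uniform(0.05, 0.5, 5000)
f_sim = 0.01
het = 2 * freqs * (1 - freqs) * (1 - f_sim)
hom_alt = freqs**2 + freqs * (1 - freqs) * f_sim
probs = np.stack([1 - het - hom_alt, het, hom_alt], axis=1)
genotypes = np.array([rng.choice(3, size=200, p=pr) for pr in probs]).T
print(f"estimated F_IS: {wright_fis(genotypes):.3f}")
```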
242. A statistical method for analyzing and comparing spatiotemporal cortical activation patterns.
- Author
-
Krauss P, Metzner C, Schilling A, Tziridis K, Traxdorf M, Wollbrink A, Rampp S, Pantev C, and Schulze H
- Subjects
- Action Potentials, Algorithms, Animals, Cerebral Cortex cytology, Electroencephalography, Humans, Magnetoencephalography, Mice, Somatosensory Cortex cytology, Somatosensory Cortex physiology, Spatio-Temporal Analysis, Brain Mapping, Cerebral Cortex physiology, Statistics as Topic methods
- Abstract
Information in the cortex is encoded in spatiotemporal patterns of neuronal activity, but the exact nature of that code remains elusive. While onset responses to simple stimuli are associated with specific loci in cortical sensory maps, it is completely unclear how information about a sustained stimulus that is perceived for minutes or even longer is encoded once discharge rates have decayed back to spontaneous levels. Using a newly developed statistical approach (multidimensional cluster statistics (MCS)) that allows for a comparison of clusters of data points in n-dimensional space, we here demonstrate that the information about long-lasting stimuli is encoded in the ongoing spatiotemporal activity patterns in sensory cortex. We successfully apply MCS to multichannel local field potential recordings in different rodent models and sensory modalities, as well as to human MEG and EEG data, demonstrating its universal applicability. MCS thus indicates novel ways for the development of powerful read-out algorithms of spatiotemporal brain activity that may be implemented in innovative brain-computer interfaces (BCI).
- Published
- 2018
- Full Text
- View/download PDF
243. Patient complaints as a means to improve quality of hospital care. Results of a qualitative content analysis
- Author
-
Hoffmann S, Dreher-Hummel T, Dollinger C, and Frei IA
- Subjects
- Documentation methods, Documentation standards, Humans, Patient-Centered Care organization & administration, Patient-Centered Care standards, Statistics as Topic methods, Statistics as Topic organization & administration, Switzerland, Total Quality Management organization & administration, Total Quality Management standards, Communication, Nursing Service, Hospital organization & administration, Nursing Service, Hospital standards, Patient Satisfaction, Quality Improvement organization & administration, Quality Improvement standards
- Abstract
Background: Many hospitals have defined procedures for complaint management. A systematic analysis of patient complaints helps to identify similar complaints and patterns so that targeted improvement measures can be derived (Gallagher & Mazor, 2015). Aim: Our three-month, nurse-led practice development project aimed 1) to identify complaints regarding communication issues, 2) to systemise and prioritise complaints regarding communication issues, and 3) to derive clinic-specific recommendations for improvement. Method: We analysed 273 patient complaints documented by quality management (secondary data analysis). Using content analysis and applying the coding taxonomy for inpatient complaints by Reader, Gillespie and Roberts (2014), we distinguished communication-related complaints. By further inductive differentiation of these complaints, we identified patterns and prioritised fields of action. Results: We identified 186 communication-related complaints divided into 16 subcategories. For each subcategory, improvement interventions were derived, discussed and prioritised. Conclusions: Patient complaints thus provided an excellent opportunity for reflection and workplace learning for nurses. The analysis also provided an impetus to make the topic of "person-centered care" concrete for nurses.
- Published
- 2018
- Full Text
- View/download PDF
244. HAPT2D: high accuracy of prediction of T2D with a model combining basic and advanced data depending on availability.
- Author
-
Di Camillo B, Hakaste L, Sambo F, Gabriel R, Kravic J, Isomaa B, Tuomilehto J, Alonso M, Longato E, Facchinetti A, Groop LC, Cobelli C, and Tuomi T
- Subjects
- Adult, Diabetes Mellitus, Type 2 epidemiology, Female, Finland epidemiology, Follow-Up Studies, Humans, Male, Middle Aged, Models, Theoretical, Predictive Value of Tests, Prospective Studies, Spain epidemiology, Statistics as Topic methods, Blood Glucose metabolism, Diabetes Mellitus, Type 2 blood, Diabetes Mellitus, Type 2 diagnosis, Statistics as Topic standards
- Abstract
Objective: Type 2 diabetes arises from the interaction of physiological and lifestyle risk factors. Our objective was to develop a model for predicting the risk of T2D, which could use various amounts of background information., Research Design and Methods: We trained a survival analysis model on 8483 people from three large Finnish and Spanish data sets, to predict the time until incident T2D. All studies included anthropometric data, fasting laboratory values, an oral glucose tolerance test (OGTT) and information on co-morbidities and lifestyle habits. The variables were grouped into three sets reflecting different degrees of information availability. Scenario 1 included background and anthropometric information; Scenario 2 added routine laboratory tests; Scenario 3 also added results from an OGTT. Predictive performance of these models was compared with FINDRISC and Framingham risk scores., Results: The three models predicted T2D risk with an average integrated area under the ROC curve equal to 0.83, 0.87 and 0.90, respectively, compared with 0.80 and 0.75 obtained using the FINDRISC and Framingham risk scores. The results were validated on two independent cohorts. Glucose values and particularly 2-h glucose during OGTT (2h-PG) had the highest predictive value. Smoking, marital and professional status, waist circumference, blood pressure, age and gender were also predictive., Conclusions: Our models provide an estimate of a patient's risk over time and outperform the traditional FINDRISC and Framingham scores for prediction of T2D risk. Of note, the models developed in Scenarios 1 and 2 used only variables easily available at general patient visits., (© 2018 European Society of Endocrinology.)
- Published
- 2018
- Full Text
- View/download PDF
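The scenario structure described above (nested predictor sets reflecting increasing data availability, each feeding a survival model for time to incident T2D) can be mimicked with any survival-analysis toolkit. The sketch below fits Cox models to simulated data using the lifelines package; the variables, simulated outcomes and Cox specification are illustrative assumptions, not the authors' model or data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical cohort; names only loosely mirror the scenarios described above
rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(55, 10, n),
    "bmi": rng.normal(28, 4, n),
    "fasting_glucose": rng.normal(5.5, 0.8, n),
    "ogtt_2h_glucose": rng.normal(6.5, 1.8, n),
})
# Simulated time to incident T2D whose hazard rises with a latent risk score
risk = 0.03 * df["age"] + 0.05 * df["bmi"] + 0.4 * df["ogtt_2h_glucose"]
hazard = np.exp(0.5 * (risk - risk.mean()))
time_to_event = rng.exponential(1.0 / hazard)
censoring = rng.uniform(0.5, 3.0, n)
df["time"] = np.minimum(time_to_event, censoring)
df["event"] = (time_to_event <= censoring).astype(int)

# Nested predictor sets mimicking increasing availability of information
scenarios = {
    "Scenario 1 (background + anthropometrics)": ["age", "bmi"],
    "Scenario 2 (+ routine laboratory values)": ["age", "bmi", "fasting_glucose"],
    "Scenario 3 (+ OGTT 2-h glucose)": ["age", "bmi", "fasting_glucose", "ogtt_2h_glucose"],
}
for label, cols in scenarios.items():
    cph = CoxPHFitter().fit(df[cols + ["time", "event"]],
                            duration_col="time", event_col="event")
    print(f"{label}: concordance index = {cph.concordance_index_:.3f}")
```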
245. Elegant grapheme-phoneme correspondence: a periodic chart and singularity generalization unify decoding.
- Author
-
Gates L
- Subjects
- Humans, Learning physiology, Statistics as Topic standards, Phonetics, Statistics as Topic methods
- Abstract
The accompanying article introduces highly transparent grapheme-phoneme relationships embodied within a Periodic table of decoding cells, which arguably presents the quintessential transparent decoding elements. The study then folds these cells into one highly transparent but simply stated singularity generalization-this generalization unifies the decoding cells (97% transparency). Deeper, the periodic table and singularity generalization together highlight the connectivity of the periodic cells. Moreover, these interrelated cells, coupled with the singularity generalization, clarify teaching targets and enable efficient learning of the letter-sound code. This singularity generalization, in turn, serves as a model for creating unified but easily stated subordinate generalizations for any one of the transparent cells or groups of cells shown within the tables. The article then expands the periodic cells into two tables of teacher-ready sample word lists-one table includes sample words for the basic and phonogram vowel cells, and the other table embraces word samples for the transparent consonant cells. The paper concludes with suggestions for teaching the cellular transparency embedded within reoccurring isolated words and running text to promote decoding automaticity of the periodic cells.
- Published
- 2018
- Full Text
- View/download PDF
246. Machine learning techniques for mass spectrometry imaging data analysis and applications.
- Author
-
Zhang Y and Liu X
- Subjects
- Humans, Machine Learning statistics & numerical data, Mass Spectrometry methods, Statistics as Topic methods
- Published
- 2018
- Full Text
- View/download PDF
247. Coupled generative adversarial stacked Auto-encoder: CoGASA.
- Author
-
Kiasari MA, Moirangthem DS, and Lee M
- Subjects
- Pattern Recognition, Visual, Neural Networks, Computer, Statistics as Topic methods
- Abstract
Coupled Generative Adversarial Network (CoGAN) was recently introduced in order to model a joint distribution of a multi-modal dataset. The CoGAN model lacks the capability to handle noisy data, and it is computationally expensive and inefficient for practical applications such as cross-domain image transformation. In this paper, we propose a new method, named the Coupled Generative Adversarial Stacked Auto-encoder (CoGASA), to directly transfer data from one domain to another domain with robustness to noise in the input data, as well as to reduce the computation time. We evaluate the proposed model using MNIST and the Large-scale CelebFaces Attributes (CelebA) datasets, and the results demonstrate a highly competitive performance. Our proposed models can easily transfer images into the target domain with minimal effort., (Copyright © 2018 Elsevier Ltd. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
248. Do estimates of blood loss differ between student midwives and midwives? A multicenter cross-sectional study.
- Author
-
Pranal M, Guttmann A, Ouchchane L, Parayre I, Rivière O, Leroux S, Bonnefont S, Debost-Legrand A, and Vendittelli F
- Subjects
- Adult, Cross-Sectional Studies, Education, Nursing, Baccalaureate methods, Female, France, Humans, Midwifery education, Nurse Midwives psychology, Pregnancy, Reproducibility of Results, Statistics as Topic methods, Students, Nursing psychology, Surveys and Questionnaires, Clinical Competence standards, Parturition physiology, Postpartum Hemorrhage classification, Statistics as Topic standards
- Abstract
Objective: the principal objective of this study was to assess the quality of blood loss estimates by midwives and student midwives. The secondary objectives were: to assess the intraobserver agreement of visual blood estimates and the rate of underestimation of blood loss by participants, and to estimate the sensitivity, specificity, and negative likelihood ratio of these estimates for clinically pertinent blood losses (≥ 500mL and ≥ 1000mL)., Design: multicenter cross-sectional study., Setting: thirty-three French maternity units and 35 French midwifery schools participated in this study., Participants: volunteer French midwifery students (n = 463) and practicing midwives (n = 578)., Intervention: an online survey showed 16 randomly ordered photographs of 8 different simulated blood quantities (100, 150, 200, 300, 500, 850, 1000, and 1500mL) with a reference 50-mL image in each photo and asked participants to estimate the blood loss. The visual blood loss estimates were compared with Fisher's exact test. Intraobserver agreement for these estimates was assessed with a weighted kappa coefficient, and the negative predictive values (probability of no hemorrhage when visual estimate was negative) were calculated from prevalence rates in the literature., Findings: of the 16,656 estimates obtained, 34.1% were accurate, 37.2% underestimated the quantity presented, and 28.7% overestimated it. Analyses of the intraobserver reproducibility between the two estimates of the same photograph showed that agreement was highest (weighted kappa ≥ 0.8) for the highest values (1000mL, 1500mL). For each volume considered, students underestimated blood loss more frequently than midwives. In both groups, the negative predictive values regarding postpartum hemorrhage (PPH) diagnosis (severe or not) were greater than 98%., Key Conclusions and Implications for Practice: student midwives tended to underestimate the quantity of blood loss more frequently than the midwives. Postpartum hemorrhage (≥ 500mL) was always identified, but severe postpartum hemorrhage (≥ 1000mL) was identified in fewer than half the cases. These results should be taken into account in training both student midwives and practicing professionals., (Copyright © 2018 Elsevier Ltd. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
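Intraobserver agreement between two visual estimates of the same photograph, as described above, can be summarised with a weighted kappa on the ordinal volume categories. A minimal sketch using scikit-learn; the category codings and the linear weighting scheme are illustrative choices, not necessarily those used in the study.

```python
from sklearn.metrics import cohen_kappa_score

# Ordinal category indices for the 8 simulated volumes (100 ... 1500 mL);
# hypothetical first and second estimates by one participant for the same photographs
volumes_ml = [100, 150, 200, 300, 500, 850, 1000, 1500]
first_estimate  = [0, 1, 1, 3, 3, 5, 6, 7]    # index into volumes_ml
second_estimate = [0, 0, 2, 3, 4, 5, 6, 7]

kappa = cohen_kappa_score(first_estimate, second_estimate, weights="linear")
print(f"weighted kappa (intraobserver agreement): {kappa:.2f}")
```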
249. Statistical primer: methodology and reporting of meta-analyses.
- Author
-
Buccheri S, Sodeck GH, and Capodanno D
- Subjects
- Data Interpretation, Statistical, Humans, Models, Statistical, Publication Bias, Research Design, Meta-Analysis as Topic, Statistics as Topic methods
- Abstract
In modern medicine, the results of a comprehensive and methodologically sound meta-analysis bring the most robust, high-quality information to support evidence-based decision-making. With recent developments in meta-analytic approaches, statistical paradigms and software implementations, network and patient-level meta-analyses have gained popularity alongside conventional pairwise study-level meta-analyses. However, pitfalls are common in this challenging and rapidly evolving field of statistics. In this regard, guidelines have been introduced to standardize, strengthen and homogenize different aspects of conducting and reporting the results of a meta-analysis. Current recommendations advise a careful selection of the individual studies to be pooled, mainly based on the methodological quality and homogeneity in study designs. Indeed, even if a reasonable degree of variability across study results (namely, heterogeneity) can be accounted for with proper statistics (i.e. random-effect models), no adjustment can compensate for meta-analyses that violate the requirements of clinical validity and similarity across the included studies. In this context, this statistical primer aims at providing a conceptual framework, complemented by a practical example, for conducting, interpreting and critically evaluating meta-analyses.
- Published
- 2018
- Full Text
- View/download PDF
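As a concrete companion to the primer above, the sketch below pools hypothetical study-level effect estimates with a DerSimonian-Laird random-effects model, the standard method-of-moments approach for accommodating between-study heterogeneity, and reports the pooled estimate, tau-squared and I-squared. The effect sizes and variances are made up for illustration.

```python
import numpy as np

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooling of study-level effect estimates
    (e.g. log risk ratios) with their within-study variances."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w_fixed = 1.0 / variances
    pooled_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
    # Cochran's Q and the method-of-moments between-study variance tau^2
    q = np.sum(w_fixed * (effects - pooled_fixed) ** 2)
    df = len(effects) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)
    w_random = 1.0 / (variances + tau2)
    pooled = np.sum(w_random * effects) / np.sum(w_random)
    se = np.sqrt(1.0 / np.sum(w_random))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se, tau2, i2

# Hypothetical log risk ratios and variances from five trials
log_rr = [-0.25, -0.10, -0.40, 0.05, -0.30]
var = [0.04, 0.02, 0.09, 0.03, 0.05]
est, se, tau2, i2 = random_effects_meta(log_rr, var)
print(f"pooled log RR = {est:.3f} (SE {se:.3f}), tau^2 = {tau2:.3f}, I^2 = {i2:.0f}%")
```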
250. Trigger Criteria: Big Data.
- Author
-
Wong Lama KM and DeVita MA
- Subjects
- Humans, Electronic Health Records, Environmental Monitoring methods, Heart Arrest diagnosis, Hospital Rapid Response Team organization & administration, Risk Assessment methods, Statistics as Topic methods, Vital Signs physiology
- Abstract
Electronic medical records can be used to mine clinical data (big data), providing automated analysis during patient care. This article describes the source and potential impact of big data analysis on risk stratification and early detection of deterioration. It compares use of big data analysis with existing methods of identifying at-risk patients who require rapid response. Aggregate weighted scoring systems combined with big data analysis offer an opportunity to detect clinical changes that precede rapid response team activation. Future studies must determine if this will decrease transfers to intensive care units and cardiac arrests on the floors., (Copyright © 2017 Elsevier Inc. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
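An aggregate weighted scoring system of the kind referred to above assigns points to each vital sign according to how far it deviates from a reference range and triggers review when the total crosses a threshold. The toy sketch below uses entirely illustrative thresholds and weights, not any validated early-warning score.

```python
def aggregate_warning_score(vitals):
    """Toy aggregate weighted score over a few vital signs (thresholds and weights
    are purely illustrative, not any validated early-warning score)."""
    score = 0
    hr = vitals["heart_rate"]
    if hr <= 40 or hr >= 130:
        score += 3
    elif hr >= 110:
        score += 2
    elif hr <= 50 or hr >= 90:
        score += 1
    rr = vitals["respiratory_rate"]
    if rr <= 8 or rr >= 25:
        score += 3
    elif rr >= 21:
        score += 2
    sbp = vitals["systolic_bp"]
    if sbp <= 90:
        score += 3
    elif sbp <= 100:
        score += 2
    return score

# Hypothetical patient pulled from an electronic record stream
patient = {"heart_rate": 118, "respiratory_rate": 23, "systolic_bp": 96}
total = aggregate_warning_score(patient)
print(f"score = {total}; trigger rapid response review: {total >= 5}")
```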