2,749 results for '"random error"'
Search Results
2. Sources of PM2.5 exposure misclassification in three Finnish register-based study populations and the impact on attenuation of health effect estimates
- Author
- Korhonen, Antti, Rumrich, Isabell Katharina, Roponen, Marjut, Frohn, Lise M., Geels, Camilla, Brandt, Jørgen, Tolppanen, Anna-Maija, and Hänninen, Otto
- Published
- 2024
- Full Text
- View/download PDF
3. A dynamical mathematical model for crime evolution based on a compartmental system with interactions.
- Author
- Calatayud, Julia, Jornet, Marc, and Mateu, Jorge
- Subjects
- RANDOM dynamical systems, BASIC reproduction number, ORDINARY differential equations, NONLINEAR regression, SENSITIVITY analysis
- Abstract
We use data on imprisonment in Spain to fit a system of three ordinary differential equations that describes the temporal evolution of three groups in the country: offenders who are not in prison, offenders who are in prison, and everyone else. These remaining people are considered susceptible and may become offenders through their relationships; that is, crime is regarded as behaving like a social epidemic. We first investigate the dynamics of the model to determine when criminality becomes extinct or endemic in the long run, depending on the basic reproduction number. Then, we estimate the parameters of the model and conduct a sensitivity analysis. Finally, a random error term is incorporated, and nonlinear regression is carried out to capture the unexplained variability of the data. Our results show a satisfactory model fit to the crime data, closely delineating their dynamics. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
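A minimal sketch of the kind of three-compartment model entry 3 describes, using SciPy. The compartment structure, parameter names, and values below are illustrative assumptions, not the authors' fitted model; under this toy parameterisation the basic reproduction number is R0 = beta/(gamma + mu).

```python
# Toy compartmental crime model: susceptible S, free offenders C,
# imprisoned offenders P. All parameter values are invented.
import numpy as np
from scipy.integrate import solve_ivp

def crime_ode(t, y, beta, gamma, delta, mu):
    S, C, P = y
    N = S + C + P
    new_offenders = beta * S * C / N           # social-contagion term
    dS = -new_offenders + mu * C + delta * P   # desistance and release return to S
    dC = new_offenders - (gamma + mu) * C      # minus imprisonment and desistance
    dP = gamma * C - delta * P                 # imprisonment minus release
    return [dS, dC, dP]

beta, gamma, delta, mu = 0.3, 0.2, 0.05, 0.15
print("R0 =", beta / (gamma + mu))             # R0 > 1 -> criminality endemic

sol = solve_ivp(crime_ode, (0, 200), [0.95, 0.04, 0.01],
                args=(beta, gamma, delta, mu))
print("long-run shares (S, C, P):", sol.y[:, -1].round(3))
```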
4. Correction of Experimental Thermal Analysis Data and Kinetic Computations Using Isoconversional State Diagrams.
- Author
- Tao, Qi and Krivec, Thomas
- Subjects
- THERMAL analysis, DATA analysis, PREDICTION models, FORECASTING, TEMPERATURE
- Abstract
An isoconversional state diagram (ISD) method is a graphical representation of the commonly used model-free kinetics (MFK) method. An ISD state curve describes the relationship between 1/T and ln β at a certain conversion degree, where T is the temperature and β is the heating rate. The ISD tangent rule describes the pattern of intersection of two tangents to two adjacent state curves at given points having the same temperature, which can be used to correct random error in experimental thermal analysis data. A comparison of kinetic predictions between the ISD and MFK methods shows that the corrective effect is evident and that the prediction quality can be improved within the predictable heating rate and temperature ranges. Furthermore, the proposed implicit method simplifies the ISD construction procedure compared with the previous explicit method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
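The state-curve relationship in entry 4 (ln β against 1/T at a fixed conversion degree) is the same one exploited by standard model-free kinetics. Below is a hedged sketch of an Ozawa-Flynn-Wall-style fit on synthetic data; it illustrates the 1/T versus ln β relation only and is not the authors' ISD tangent-rule correction.

```python
# At a fixed conversion degree, ln(beta) is approximately linear in 1/T;
# by Doyle's approximation the slope is about -1.052*Ea/R. Data are synthetic.
import numpy as np

R = 8.314                                          # J/(mol*K)
beta = np.array([2.0, 5.0, 10.0, 20.0])            # heating rates, K/min
T_alpha = np.array([520.0, 535.0, 548.0, 562.0])   # K at alpha = 0.5 (made up)

slope, intercept = np.polyfit(1.0 / T_alpha, np.log(beta), 1)
Ea = -slope * R / 1.052                            # apparent activation energy
print(f"Ea ~ {Ea / 1000:.0f} kJ/mol")
```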
5. Precision, Trueness, and Accuracy: On Errors and Uncertainties
- Author
- Vinicius Francisco Rofatto, Ivandro Klein, and Marcelo Tomio Matsuoka
- Subjects
- Standard Deviation, RMSE, Probability and Statistics, Normal Distribution, Random Error, Systematic Error, Geography. Anthropology. Recreation, Cartography, GA101-1776
- Abstract
In this contribution, the concepts of precision and accuracy are reviewed in light of the International Vocabulary of Metrology published by the International Bureau of Weights and Measures (BIPM). Although the specialized literature is clear about the use of the terms accuracy and precision, these terms are still frequently misused in many fields of knowledge. The term trueness, less common in the Geomatics fields, is often confused with accuracy. These conceptual errors are not restricted to vocabulary; they propagate into the analysis of the results of an experimental measurement process. Consequently, the interpretation of the results may be incorrect. These common misconceptions stem from a lack of understanding of the nature of the measurement process: every measurement is, by definition, composed of its true value (a constant) and, inevitably, an associated error. Here, we show that the concept of uncertainty is more applicable than the concept of error. Furthermore, we demonstrate mathematically how the metrics of precision, trueness, and accuracy are closely related to the error components, which are traditionally classified as systematic and random. Computational simulation using the Monte Carlo method is employed here as an important tool for understanding the subject of this work in practice.
- Published
- 2025
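The decomposition entry 5 discusses can be demonstrated with a few lines of Monte Carlo, in the entry's own terms: trueness tracks the systematic error component, precision tracks the random component, and accuracy (here measured by RMSE) combines both. All values are synthetic.

```python
# Monte Carlo illustration of RMSE^2 = bias^2 + SD^2.
import numpy as np

rng = np.random.default_rng(42)
true_value = 100.0
systematic_error = 0.8      # constant offset -> poor trueness
random_sd = 0.5             # spread -> precision

measurements = true_value + systematic_error + rng.normal(0.0, random_sd, 100_000)

bias = measurements.mean() - true_value                       # trueness metric
sd = measurements.std(ddof=1)                                 # precision metric
rmse = np.sqrt(np.mean((measurements - true_value) ** 2))     # accuracy metric

print(f"bias={bias:.3f}  sd={sd:.3f}  rmse={rmse:.3f}")
print(f"sqrt(bias^2 + sd^2)={np.sqrt(bias**2 + sd**2):.3f}")  # matches rmse
```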
6. Modeling and Simulation of Camera Positioning Stance on the Basis of Different Error Sources.
- Author
- Fan Chen, Ming Shang, Anjia Wang, Yadamsuren, Adiya, and Youqing Ma
- Subjects
- OPTICAL radar, LIDAR, CAMERA calibration, FOREIGN bodies, POINT cloud
- Abstract
High-definition (HD) cameras and Light Detection and Ranging (LiDAR) are used for foreign object identification and volume alarm provision on coal transport lines, and improving the accuracy of jointly calibrated HD camera and LiDAR data is essential. To determine the effect of errors on camera parameters through the classical image backward rendezvous model, this study proposed an evaluation model for the error sources in backward rendezvous. Positioning accuracy was analyzed by assuming the existence of systematic errors in the camera principal distance and image principal point offset and the presence of random errors in the image point and LiDAR point cloud coordinates. The accuracy of the computational model was verified through experiments. Results demonstrate that the positioning accuracies are -23.2 mm ± 17.4 mm, -20.6 mm ± 17.5 mm, and -2.9 mm ± 19.6 mm at a given random error of 0.02 m in the control point and an error of 1 pixel in the image point. Increasing the number of control points improved the accuracy of camera positioning. The systematic errors in the camera's principal distance and in the image point coordinates affect the accuracy of positioning. This study provides a theoretical basis for the joint calibration of HD cameras and LiDAR, which is crucial for improving the accuracy of foreign object identification and volume alarm functions in coal transport lines. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Planning target volume margin in head and neck cancer patients undergoing radiation therapy: Estimations derived from own data and literature.
- Author
- Aher, Pratik, Chirkute, Madhuri, Kale, Pournima, Sonawane, Rupesh, Singh, Ashok, and Datta, Niloy Ranjan
- Subjects
- HEAD & neck cancer, RADIOTHERAPY, CANCER patients, NECK, INTENSITY modulated radiotherapy
- Abstract
Planning target volume (PTV) to deliver the desired dose to the clinical target volume (CTV) accounts for systematic (Σ) and random (σ) errors during the planning and execution of intensity modulated radiation therapy (IMRT). As these errors vary between departments, this study was conducted to determine the 3-dimensional PTV (PTV3D) margins for head and neck cancer (HNC) at our center. The same was also estimated from reported studies for a comparative assessment. A total of 77 patients with HNC undergoing IMRT were included. Of these, 39 patients received radical RT and 38 received postoperative IMRT. An extended no-action-level protocol was implemented using on-board imaging. Shifts in the mediolateral (ML), anteroposterior (AP), and superoinferior (SI) directions of each patient were recorded for every fraction. PTV margins in each direction (ML, AP, SI) and PTV3D were calculated using van Herk's equation. Weighted PTV3D was also computed from the Σ and σ errors in each direction published in the literature for HNC. Our patients were staged T2-4 (66/77) and N0 (39/77). In all, 2280 on-board images were acquired, and daily shifts in each direction were recorded. The PTV margins in the ML, AP, and SI directions were computed as 3.2 mm, 2.9 mm, and 2.6 mm, respectively. The PTV3D margin was estimated to be 6.5 mm. This compared well with the weighted median PTV3D of 7.2 mm (range: 3.2 to 9.9) computed from the 16 studies reported in the literature. To ensure ≥95% CTV dose coverage in 90% of HNC patients, the PTV3D margin for our department was estimated as 6.5 mm. This agrees with the weighted median PTV3D margin of 7.2 mm computed from the 16 published studies in HNC. Site-specific PTV3D margin estimations should be an integral component of the quality assurance protocol of each department to ensure adequate dose coverage of the CTV during IMRT. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
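For reference, the van Herk margin recipe named in entry 7 combines the systematic (Σ) and random (σ) error SDs per axis as margin = 2.5Σ + 0.7σ. A small sketch; the Σ and σ values below are invented, not the study's measured shifts.

```python
# Classic population margin for >=95% CTV dose coverage in ~90% of patients.
def van_herk_margin(Sigma_mm: float, sigma_mm: float) -> float:
    return 2.5 * Sigma_mm + 0.7 * sigma_mm

# Hypothetical per-axis error SDs in mm (not the paper's data):
for axis, (Sigma, sigma) in {"ML": (1.0, 1.0), "AP": (0.9, 0.9), "SI": (0.8, 0.8)}.items():
    print(axis, f"{van_herk_margin(Sigma, sigma):.1f} mm")
```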
8. Assessment of accuracy of laboratory testing results, relative to peer group consensus values in external quality control, by bivariate z-score analysis: the example of D-Dimer.
- Author
- Meijer, Piet, Sobas, Frederic, and Tsiamyrtzis, Panagiotis
- Subjects
- BIVARIATE analysis, FIBRIN fragment D, PEERS, QUALITY control charts, QUALITY control, GAUSSIAN distribution, LABORATORIES, TESTING laboratories
- Abstract
The aim of this study is to develop a practical method for bivariate z-score analysis which can be applied to the surveys of an external quality assessment programme. To develop the bivariate z-score analysis, the results of four surveys of the 2022 international D-Dimer external quality assessment programme of the ECAT Foundation were used. The proposed methodology starts by identifying the bivariate outliers, using a Supervised Sequential Hotelling T² control chart. The outlying data are removed, and all the remaining data are used to provide robust estimates of the parameters of the assumed underlying bivariate normal distribution. Based on these estimates, two nested homocentric ellipses are drawn, corresponding to confidence levels of 95% and 99.7%. The bivariate z-score plot described provides the laboratory with an indication of both systematic and random deviations from zero z-score values. The bivariate z-score analysis was examined within survey 2022-D4 across the three most frequently used methods. The number of z-score pairs included varied between 830 and 857, and the number of bivariate outliers varied between 20 and 28. The correlation between the z-score pairs varied between 0.431 and 0.647. The correlation between the z-score pairs for the three most frequently used methods varied between 0.208 and 0.636. The use of the bivariate z-score analysis is of major importance when multiple samples are distributed in the same survey and dependency of the results is likely. Important lessons can be drawn from the shape of the ellipse with respect to random and systematic deviations, while individual laboratories are informed about their position in the state-of-the-art distribution and whether they have to deal with systematic and/or random deviations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
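The core of a bivariate z-score check like entry 8's can be sketched by fitting a bivariate normal to the (z1, z2) pairs and flagging pairs outside the 95% and 99.7% probability ellipses via the Mahalanobis distance. The data are synthetic, and the paper's Supervised Sequential Hotelling T² chart and robust re-estimation steps are not reproduced.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=850)  # z-score pairs

mean = z.mean(axis=0)
cov = np.cov(z, rowvar=False)
d2 = np.einsum("ij,jk,ik->i", z - mean, np.linalg.inv(cov), z - mean)  # squared Mahalanobis

for level in (0.95, 0.997):
    thresh = chi2.ppf(level, df=2)       # ellipse boundary for a bivariate normal
    print(f"{level:.1%} ellipse: {np.sum(d2 > thresh)} laboratories outside")
```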
9. Clinical Laboratory Statistics
- Author
- Bailey, David N. and Fitzgerald, Robert L.
- Published
- 2024
- Full Text
- View/download PDF
10. Basics of Uncertainty
- Author
- Stinnett, J., Santi, P. A., Swinhoe, M. T., Geist, William H., editor, Santi, Peter A., editor, and Swinhoe, Martyn T., editor
- Published
- 2024
- Full Text
- View/download PDF
11. IOL Constant Optimization
- Author
- Aristodemou, Petros, Singh, Arun D., Series Editor, Aramberri, Jaime, editor, Hoffer, Kenneth J., editor, Olsen, Thomas, editor, Savini, Giacomo, editor, and Shammas, H. John, editor
- Published
- 2024
- Full Text
- View/download PDF
12. Experimentation in Psychology
- Author
- Audusseau, Jean, Buchwald, Jed Z., Series Editor, Feingold, Mordechai, Advisory Editor, Franklin, Allan D., Advisory Editor, Shapiro, Alan E., Advisory Editor, Hoyningen-Huene, Paul, Advisory Editor, Lützen, Jesper, Advisory Editor, Newman, William R., Advisory Editor, Renn, Jürgen, Advisory Editor, Roland, Alex, Advisory Editor, Allamel-Raffin, Catherine, editor, Gangloff, Jean-Luc, editor, and Gingras, Yves, editor
- Published
- 2024
- Full Text
- View/download PDF
13. Wireless TCP Congestion Control Based on Loss Discrimination Approach Using Machine Learning
- Author
- Dhawane, Pooja G., Navghare, Pranali, Manjre, Bhushan, Dhule, Piyush, Fale, Pradeep, Kahile, Milind, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Tan, Kay Chen, Series Editor, Kumar, Amit, editor, and Mozar, Stefan, editor
- Published
- 2024
- Full Text
- View/download PDF
14. Global Navigation Satellite System Receiver Positioning in Harsh Environments via Clock Bias Prediction by Empirical Mode Decomposition and Back Propagation Neural Network Method.
- Author
- Du, Libin, Chen, Hao, Yuan, Yibo, Song, Longjiang, and Meng, Xiangqian
- Subjects
- GLOBAL Positioning System, HILBERT-Huang transform, STATISTICAL bias, BACK propagation, GPS receivers
- Abstract
This paper proposes a novel method to improve the short-term clock bias prediction accuracy of navigation receivers and thereby address the problem of low positioning accuracy when satellite signal quality deteriorates. Considering that the clock bias of a navigation receiver is equivalent to a virtual satellite, the predicted value of the clock bias is used to assist navigation receivers in positioning. Consequently, a combined prediction method for navigation receiver clock bias based on Empirical Mode Decomposition (EMD) and Back Propagation Neural Network (BPNN) analysis theory is demonstrated. In view of the systematic and random errors in the clock bias data from navigation receivers, the EMD method is used to decompose the clock bias data; then, the BPNN prediction method is used to establish a high-precision clock bias prediction model; finally, based on the predicted clock bias value, three-dimensional positioning of the navigation receiver is realized by expanding the observation equation. The experimental results show that the proposed model is suitable for clock bias time series prediction, and the three-dimensional positioning information it provides meets the requirements of navigation applications in the harsh environment of only three visible satellites. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
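A conceptual sketch of the decompose-then-predict scheme in entry 14: EMD splits the clock-bias series into intrinsic mode functions (IMFs), a small network forecasts each IMF one step ahead, and the forecasts are summed. It assumes the PyEMD package (pip install EMD-signal) and scikit-learn, with a synthetic series standing in for real receiver clock bias; the paper's BPNN architecture and positioning step are not reproduced.

```python
import numpy as np
from PyEMD import EMD
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.arange(500)
clock_bias = 1e-3 * t + 5e-2 * np.sin(t / 20) + rng.normal(0, 1e-2, t.size)  # synthetic

imfs = EMD().emd(clock_bias)          # IMFs plus residual trend
window, pred = 10, 0.0
for comp in imfs:
    X = np.lib.stride_tricks.sliding_window_view(comp[:-1], window)
    y = comp[window:]                 # next value after each window
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X, y)
    pred += model.predict(comp[-window:].reshape(1, -1))[0]  # one-step forecast per IMF

print("predicted next clock bias:", pred)
```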
15. 14 - Appraising Data Collection Methods
- Author
- Sullivan-Bolyai, Susan and Bova, Carol
- Published
- 2022
- Full Text
- View/download PDF
16. Experimental Comparison of Interference and Autocollimating Null Indicators
- Author
- B. Nyamweru and E. V. Shishalova
- Subjects
- dynamic goniometer, interference null indicator, autocollimating null indicator, optical polygon, systematic error, random error, Electronics, TK7800-8360
- Abstract
Introduction. At present, measuring angles with high accuracy is an essential task in various scientific and industrial fields. The goniometer is one of the most widespread high-precision angle measuring devices, and it can incorporate various types of null indicators. In turn, null indicators (NI) are based on different operating principles and can be sensitive to external factors that contribute to the measurement error. Aim. Experimental comparison of two NI types: an interference NI with a Koester prism and an autocollimating NI based on a quadrant photodiode. Materials and methods. An experimental setup was assembled, including two NI that could be connected to one goniometer and measure the accumulated angles of one optical polygon under the same conditions. Results. As a result of conducting measurements and performing a cross-calibration procedure, four sets of data were obtained. An analysis of the processed data showed that the difference in the errors of the ring laser when using the two NI did not exceed 0.06 arc seconds, which is within the margin of random error. At the same time, the difference between the deviations of the reflecting faces from the nominal position for two faces exceeded this limit, which confirms the effect of deviation of the surface from the plane on angular measurements with different types of null indicators. Conclusion. According to the results obtained, from the random-error point of view the interference NI showed higher performance, demonstrating an RMS of measured values of 0.02 arc seconds over 25 prism revolutions, while the autocollimating NI had an RMS at the level of 0.04 arc seconds over 64 revolutions. Presumably, this may be caused by the installation specifics of the NI. It should also be noted that there is no correlation between the statistical characteristics of the reflecting face itself and the difference between its deviations determined by the different NI types.
- Published
- 2023
- Full Text
- View/download PDF
17. Development of an Information Accuracy Control System
- Author
- Mehdiyeva, A. M., Sardarova, I. Z., Mahmudova, Z. A., Xhafa, Fatos, Series Editor, Shakya, Subarna, editor, Papakostas, George, editor, and Kamel, Khaled A., editor
- Published
- 2023
- Full Text
- View/download PDF
18. 4 - The study of risk factors and causation
- Published
- 2020
- Full Text
- View/download PDF
19. Systematic Analysis of the Influence of Multidimensional Variables on Machining Errors.
- Author
- 赵华伟, 刘燕军, 李 哲, and 李晨蕊
- Subjects
- MACHINE parts, STATISTICAL services, CLUSTER analysis (Statistics), RANDOM variables, STATISTICS, MACHINING, MACHINE theory
- Abstract
Copyright of Ordnance Industry Automation is the property of Editorial Board for Ordnance Industry Automation and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
20. Improving the Accuracy of TanDEM-X Digital Elevation Model Using Least Squares Collocation Method.
- Author
- Shen, Xingdong, Zhou, Cui, and Zhu, Jianjun
- Subjects
- LEAST squares, DIGITAL elevation models, STANDARD deviations, BACK propagation
- Abstract
The TanDEM-X Digital Elevation Model (DEM) is limited by the radar side-view imaging mode and still has gaps and anomalies that directly affect the application potential of the data. Many methods have been used to improve the accuracy of the TanDEM-X DEM, but these algorithms primarily focus on eliminating systematic errors trending over a large area in the DEM, rather than random errors. Therefore, this paper presents the least-squares collocation-based error correction algorithm (LSC-TXC) for the TanDEM-X DEM, which effectively eliminates both systematic and random errors, to enhance the accuracy of the TanDEM-X DEM. The experimental results demonstrate that the TanDEM-X DEM corrected by the LSC-TXC algorithm reduces the root mean square error (RMSE) from 6.141 m to 3.851 m, a significant improvement in accuracy (37.3%). Compared to three conventional algorithms, namely Random Forest, Height Difference Fitting Neural Network and Back Propagation Neural Network, the presented algorithm reduces the RMSEs of the corrected TanDEM-X DEMs by 6.5%, 7.6%, and 18.1%, respectively. This algorithm provides an efficient tool for correcting DEMs such as TanDEM-X over a wide range of areas. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
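Least-squares collocation, the core of entry 20's LSC-TXC algorithm, estimates a spatially correlated "signal" from noisy residuals using assumed covariance functions. A toy one-dimensional sketch follows; the covariance model and noise level are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 80)                 # positions along a profile (km)
signal = np.sin(x)                         # "true" correlated error surface
obs = signal + rng.normal(0, 0.3, x.size)  # DEM residuals with random noise

def gauss_cov(a, b, c0=1.0, d0=2.0):
    d = np.abs(a[:, None] - b[None, :])
    return c0 * np.exp(-(d / d0) ** 2)     # assumed Gaussian covariance model

C_ss = gauss_cov(x, x)                     # signal covariance
C_ll = C_ss + 0.3**2 * np.eye(x.size)      # observation covariance (signal + noise)
s_hat = C_ss @ np.linalg.solve(C_ll, obs)  # collocation estimate of the signal

print("RMSE before:", np.sqrt(np.mean((obs - signal) ** 2)))
print("RMSE after :", np.sqrt(np.mean((s_hat - signal) ** 2)))
```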
21. Investigation of random errors and their causes in a teaching hospital laboratory in Hamadan, 2020 to 2021.
- Author
- مهرناز قهرمانی, رسول سلیمی, فاطمه امیری, and مهتا رزاقی
- Subjects
- QUALITY control
- Abstract
Introduction: Laboratory diagnoses are sometimes accompanied by errors, and detecting these errors can help prevent their recurrence. This study aimed to evaluate random errors and their causes in a Hamadan teaching hospital laboratory during 2020 to 2021. Methods and Materials: In this cross-sectional, retrospective study, random errors recorded in a Hamadan teaching hospital laboratory during 2020 to 2021 were gathered. The data were analyzed with SPSS version 24, and frequencies and percentages of the variables of interest are presented. Results: According to the results, 312 random errors had been recorded. Of these, 98 (31.42%) had been detected only by the Westgard method and 214 (68.58%) by both the Westgard and WHO methods; 98 (31.42%) of the random errors had been detected by the R4s Westgard rule and 214 (68.58%) by the 13s Westgard rule and the WHO rule. The most common causes of random errors were equipment failure (109; 34.93%), reagent failure (82; 26.30%), environmental factors (68; 21.79%), and human error or procedure change (54; 17.07%). Most errors were related to the biochemistry (200; 64.10%), hematology (70; 22.43%), and coagulation (42; 13.47%) departments. Discussion and Conclusion: The main causes of random errors were equipment and reagent failure. Equipment updates, reagent quality control, and staff training could be effective in reducing these errors. [ABSTRACT FROM AUTHOR]
- Published
- 2023
22. Fire judgment method based on intelligent optimization algorithm and evidence fusion.
- Author
- Junfeng, Dai and Li-hui, Fu
- Subjects
- OPTIMIZATION algorithms, FIRE detectors, PARTICLE swarm optimization, SWARM intelligence, MATHEMATICAL optimization
- Abstract
To reduce the adverse effects of inevitable error and random error in the fire data obtained by multiple sensors, we propose in this paper a fire judgment method that uses swarm intelligence optimization techniques and evidence fusion. First, three sensors (CO, smoke, and temperature) are used to obtain fire data, which are processed by the interval number method, and the distances between the fire data and the characteristic values of the fire grades are calculated. The reliability coefficient Nk is optimized by swarm intelligence algorithms to complete the modification of the mass function. Then, the combination rule of interval evidence and the modified mass function are synthesized to obtain the comprehensive interval evidence. Finally, the fire grade is judged according to the decision rule. We study the usability of these techniques for fire judgment and compare the optimization performance of several important swarm intelligence algorithms, including traditional Particle Swarm Optimization (PSO) and its improved variant (IPSO), as well as the more recent Black Widow Optimization (BWO) and Bald Eagle Search (BES) algorithms and their improved variants (IBWO and IBES). The experimental results show that the average probabilities of IBWO, IBES, and IPSO obtaining the correct fire grade are 0.96, 0.88, and 0.86, respectively, so the performance of the three improved algorithms in fire judgment is greatly increased; compared with the traditional D-S evidence fusion method, the improvement ratios of IBWO, IBES, and IPSO are 43.3%, 31.3%, and 28.4%. Therefore, the D-S evidence fusion method optimized by swarm intelligence algorithms outperforms the traditional D-S evidence method for fire detection, which provides a new idea for fire detection. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
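The evidence-fusion backbone of entry 22 is Dempster's rule of combination. A toy sketch fusing mass functions from three sensors over invented fire grades; the paper's interval-valued evidence and swarm-optimized reliability coefficients are omitted.

```python
from itertools import product

FRAME = frozenset({"no_fire", "smolder", "flame"})   # hypothetical fire grades

def dempster(m1, m2):
    """Dempster's rule: combine two mass functions over subsets of FRAME."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y                        # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

m_co    = {frozenset({"flame"}): 0.6, frozenset({"smolder", "flame"}): 0.3, FRAME: 0.1}
m_smoke = {frozenset({"smolder"}): 0.5, frozenset({"smolder", "flame"}): 0.4, FRAME: 0.1}
m_temp  = {frozenset({"flame"}): 0.7, FRAME: 0.3}

fused = dempster(dempster(m_co, m_smoke), m_temp)
for focal, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(focal), round(mass, 3))
```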
23. Variance Control Procedures
- Author
- Weiss, Heidi L., Wu, Jianrong, Epnere, Katrina, Williams, O. Dale, Meinert, Curtis L., Section editor, Piantadosi, Steven, Section editor, Piantadosi, Steven, editor, and Meinert, Curtis L., editor
- Published
- 2022
- Full Text
- View/download PDF
24. Implementation of multiple statistical methods to estimate variability and individual response to training.
- Author
- Jacques, Macsue, Landen, Shanie, Romero, Javier Alvarez, Yan, Xu, Hiam, Danielle, Jones, Patrice, Gurd, Brendon, Eynon, Nir, and Voisin, Sarah
- Subjects
- AEROBIC exercises, CONFIDENCE intervals, EXERCISE physiology, EXERCISE intensity, RESEARCH funding, STATISTICAL models, HIGH-intensity interval training, MEDICAL research
- Abstract
Multiple statistical methods have been proposed to estimate individual responses to exercise training, yet the evaluation of these methods is lacking. We compared five of these methods: the use of a control group, a control period, repeated testing during an intervention, a reliability trial, and a repeated intervention. Apparently healthy males from the Gene SMART study completed a 4-week control period, 4 weeks of High-Intensity Interval Training (HIIT), >1 year of washout, and then repeated the same 4 weeks of HIIT, followed by an additional 8 weeks of HIIT. Aerobic fitness was measured in duplicate at each time point. We found that the control group and control period were not intended to measure the degree to which individuals responded to training, but rather estimated whether individual responses to training can be detected with the current exercise protocol. After a repeated intervention, individual responses to 4 weeks of HIIT were not consistent, whereas repeated testing during the 12-week-long intervention was able to capture individual responses to HIIT. The reliability trial should not be used to study individual responses, but rather to classify participants as responders with a certain level of confidence. Twelve weeks of HIIT with repeated testing during the intervention is sufficient and cost-effective for measuring individual responses to exercise training, since it allows a confident estimate of an individual's true response. Our study has significant implications for how to improve the design of exercise studies to accurately estimate individual responses to exercise training interventions. Highlights: What are the findings? We implemented five statistical methods in a single study to estimate the magnitude of within-subject variability and quantify responses to exercise training at the individual level. The various proposed methods used to estimate individual responses to training provide different types of information and rely on different assumptions that are difficult to test. Within-subject variability is often large in magnitude and, as such, should be systematically evaluated and carefully considered in future studies to successfully estimate individual responses to training. How might it impact on clinical practice in the future? Within-subject variability in response to exercise training is a key factor that must be considered in order to obtain a reproducible measurement of individual responses to exercise training. This is akin to ensuring data are reproducible for each subject. Our findings provide guidelines for future exercise training studies to ensure results are reproducible within participants and to minimise wasting precious research resources. By implementing five suggested methods to estimate individual responses to training, we highlight their feasibility, strengths, weaknesses and costs, so that researchers can make the best decision on how to accurately measure individual responses to exercise training. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
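One standard estimator behind the control group/period comparisons in entry 24 is the SD of individual responses, obtained by subtracting the variance of control-arm change scores from the variance of intervention-arm change scores. A sketch with synthetic numbers; this is the generic textbook estimator, not necessarily the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(7)
change_int = rng.normal(4.0, 3.0, 40)  # e.g. fitness change after HIIT
change_con = rng.normal(0.0, 2.0, 40)  # change over a control period

# SD_IR^2 = SD_change_intervention^2 - SD_change_control^2
var_ir = change_int.var(ddof=1) - change_con.var(ddof=1)
sd_ir = np.sqrt(var_ir) if var_ir > 0 else 0.0  # negative -> no detectable heterogeneity
print(f"SD of individual responses ~ {sd_ir:.2f}")
```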
25. Evaluation and Error Decomposition of IMERG Product Based on Multiple Satellite Sensors.
- Author
- Li, Yunping, Zhang, Ke, Bardossy, Andras, Shen, Xiaoji, and Cheng, Yujia
- Subjects
- DETECTORS, DECOMPOSITION method, FALSE alarms, REMOTE sensing, SAMPLE size (Statistics)
- Abstract
The Integrated Multisatellite Retrievals for GPM (IMERG) is designed to derive precipitation by merging data from all the passive microwave (PMW) and infrared (IR) sensors. While the input source errors originating from the PMW and IR sensors are important, their structure and characteristics, and how the algorithm might be improved, remain unclear. Our study utilized a four-component error decomposition (4CED) method and a systematic/random error decomposition method to evaluate the detectability of the IMERG dataset and identify the precipitation errors associated with the multiple sensors. Data at a 30 min resolution from 30 precipitation stations in the Tunxi Watershed were used to evaluate the IMERG data from 2018 to 2020. The input sources include five types of PMW sensors and IR instruments. The results show that the sample ratio for IR (Morph, IR + Morph, and IR only) is much higher than that for PMW (AMSR2, SSMIS, GMI, MHS, and ATMS), with a ratio of 72.8% for IR sources and 27.2% for PMW sources. The high false ratio of the IR sensor leads to poor detectability performance: a false alarm ratio (FAR) of 0.5854, a critical success index (CSI) of 0.3014, and a Brier score (BS) of 0.1126. As for the 4CED, Morph and Morph + IR have large total bias (TB), hit overestimate bias (HOB), hit underestimate bias (HUB), false bias (FB), and miss bias (MB), which is related to prediction ability and sample size. In addition, systematic error is the prominent component for AMSR2, SSMIS, GMI, and Morph + IR, indicating some inherent error (in the retrieval algorithm) that needs to be removed. These findings can support improving the retrieval algorithm and reducing errors in the IMERG dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
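The systematic/random error decomposition mentioned in entry 25 is often done by regressing satellite estimates on gauge observations: the fitted part of the error counts as systematic, the residual part as random. A synthetic-data sketch of that split:

```python
import numpy as np

rng = np.random.default_rng(11)
gauge = rng.gamma(2.0, 2.0, 500)                   # mm per 30 min, synthetic
sat = 0.8 * gauge + 0.5 + rng.normal(0, 1.0, 500)  # biased, noisy retrieval

slope, intercept = np.polyfit(gauge, sat, 1)       # linear fit of sat on gauge
fitted = slope * gauge + intercept

mse_total = np.mean((sat - gauge) ** 2)
mse_sys = np.mean((fitted - gauge) ** 2)           # systematic component
mse_rand = np.mean((sat - fitted) ** 2)            # random component
print(f"systematic share: {mse_sys / mse_total:.1%}, random share: {mse_rand / mse_total:.1%}")
```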
26. Navigation Error Probability Analysis System for Inspection Robots Based on a Multi-Objective Optimization Cloning Algorithm.
- Author
- 施俊龙
- Abstract
To address the problems of low calculation accuracy, low probability-analysis accuracy, and low efficiency in traditional methods, a navigation error probability analysis system for inspection robots based on a multi-objective optimization cloning algorithm was designed. The overall framework of the system consists of a network communication part, a data processing and error probability analysis part, and an environment perception part. In the hardware design, the structure of the embedded data processing board and the network communication were optimized. In the software design, a navigation error probability analysis model of the inspection robot was established, and the multi-objective optimization cloning algorithm was used to solve the model and thereby analyze the navigation error probability of the inspection robot. The experimental results showed that the system had higher calculation accuracy, higher error-probability-analysis accuracy, and higher efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
27. Illustration of 2 Fusion Designs and Estimators.
- Author
- Cole, Stephen R, Edwards, Jessie K, Breskin, Alexander, Rosin, Samuel, Zivich, Paul N, Shook-Sa, Bonnie E, and Hudgens, Michael G
- Subjects
- EXPERIMENTAL design, STATISTICS, COMPUTER simulation, CONFIDENCE intervals, RESEARCH methodology, RESEARCH funding, STATISTICAL models, DATA analysis, MEASUREMENT errors, SCIENTIFIC errors, RESEARCH evaluation
- Abstract
"Fusion" study designs combine data from different sources to answer questions that could not be answered (as well) by subsets of the data. Studies that augment main study data with validation data, as in measurement-error correction studies or generalizability studies, are examples of fusion designs. Fusion estimators, here solutions to stacked estimating functions, produce consistent answers to identified research questions using data from fusion designs. In this paper, we describe a pair of examples of fusion designs and estimators, one where we generalize a proportion to a target population and one where we correct measurement error in a proportion. For each case, we present an example motivated by human immunodeficiency virus research and summarize results from simulation studies. Simulations demonstrate that the fusion estimators provide approximately unbiased results with appropriate 95% confidence interval coverage. Fusion estimators can be used to appropriately combine data in answering important questions that benefit from multiple sources of information. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
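One textbook instance of entry 27's second fusion design (correcting measurement error in a proportion with validation data) is the Rogan-Gladen estimator. A sketch with invented numbers; the paper itself uses stacked estimating functions rather than this closed form.

```python
# Rogan-Gladen correction: main-study prevalence adjusted with
# sensitivity/specificity estimated from a validation sample.
p_obs = 0.12          # proportion testing positive in the main study (invented)
se, sp = 0.85, 0.97   # sensitivity and specificity from validation data (invented)

p_corrected = (p_obs + sp - 1.0) / (se + sp - 1.0)
print(f"corrected prevalence: {p_corrected:.3f}")   # ~0.110
```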
28. Test‐retest variability of plasma biomarkers in Alzheimer's disease and its effects on clinical prediction models.
- Author
- Cullen, Nicholas C., Janelidze, Shorena, Mattsson-Carlgren, Niklas, Palmqvist, Sebastian, Bittner, Tobias, Suridjan, Ivonne, Jethwa, Alexander, Kollmorgen, Gwendlyn, Brum, Wagner S., Zetterberg, Henrik, Blennow, Kaj, Stomrud, Erik, and Hansson, Oskar
- Abstract
Introduction: The effect of random error on the performance of blood-based biomarkers for Alzheimer's disease (AD) must be determined before clinical implementation. Methods: We measured test-retest variability of plasma amyloid beta (Aβ)42/Aβ40, neurofilament light (NfL), glial fibrillary acidic protein (GFAP), and phosphorylated tau (p-tau)217 and simulated the effects of this variability on biomarker performance when predicting either cerebrospinal fluid (CSF) Aβ status or conversion to AD dementia in 399 non-demented participants with cognitive symptoms. Results: Clinical performance was highest when combining all biomarkers. Among single biomarkers, p-tau217 performed best. Test-retest variability ranged from 4.1% (Aβ42/Aβ40) to 25% (GFAP). This variability reduced the performance of the biomarkers (change in area under the curve [AUC] of approximately -1% to -4%), with the smallest effects on models with p-tau217. The percentage of individuals with unstable predicted outcomes was lowest for the multi-biomarker combination (14%). Discussion: Clinical prediction models combining plasma biomarkers, particularly p-tau217, exhibit high performance and are less affected by random error. Individuals with unstable predicted outcomes ("gray zone") should be recommended for further tests. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
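The simulation idea in entry 28 can be sketched directly: perturb a biomarker by its test-retest coefficient of variation (CV) and watch the discrimination performance (AUC) degrade. Everything below is synthetic; only the CV range echoes the abstract.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 399
outcome = rng.binomial(1, 0.4, n)                    # e.g. CSF amyloid status
biomarker = rng.normal(1.0 + 0.5 * outcome, 0.4, n)  # synthetic p-tau-like marker

print("AUC without retest error:", round(roc_auc_score(outcome, biomarker), 3))
for cv in (0.04, 0.10, 0.25):                        # test-retest CVs from ~4% to 25%
    noisy = biomarker * (1 + rng.normal(0, cv, n))   # multiplicative retest noise
    print(f"AUC with CV={cv:.0%}:", round(roc_auc_score(outcome, noisy), 3))
```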
29. A Random Error Suppression Method Based on IGWPSO-ELM for Micromachined Silicon Resonant Accelerometers.
- Author
- Wang, Peng, Huang, Libin, Zhao, Liye, and Ding, Xukai
- Subjects
- GREY Wolf Optimizer algorithm, RANDOM walks, MACHINE learning, WHITE noise, SILICON, ACCELEROMETERS
- Abstract
There are various errors in practical applications of micromachined silicon resonant accelerometers (MSRA), among which the composition of random errors is complex and uncertain. In order to improve the output accuracy of MSRA, this paper proposes an MSRA random error suppression method based on an improved grey wolf and particle swarm optimized extreme learning machine (IGWPSO-ELM). A modified wavelet threshold function is first used to separate the white noise from the useful signal. The output frequency at the previous sampling point and the sequence value are then added to the current output frequency to form a three-dimensional input. Additional improvements are made to the particle swarm optimized extreme learning machine (PSO-ELM): grey wolf optimization (GWO) is fused into the algorithm, and the three factors (inertia, acceleration and convergence) are non-linearized to improve the convergence efficiency and accuracy of the algorithm. The model trained offline using IGWPSO-ELM is applied in prediction-compensation experiments, and the results show that the method is able to reduce velocity random walk from 4.3618 μg/√Hz to 2.1807 μg/√Hz, bias instability from 2.0248 μg to 1.3815 μg, and acceleration random walk from 0.53429 μg·√Hz to 0.43804 μg·√Hz, effectively suppressing the random error in the MSRA output. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
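The first stage of entry 29's pipeline is wavelet threshold denoising. A sketch using PyWavelets (pip install PyWavelets) with the standard universal soft threshold on a synthetic resonator output; the paper uses a modified threshold function not reproduced here.

```python
import numpy as np
import pywt

rng = np.random.default_rng(9)
t = np.linspace(0, 1, 1024)
clean = 20 + 0.5 * np.sin(2 * np.pi * 3 * t)         # synthetic useful signal
freq_out = clean + rng.normal(0, 0.2, t.size)        # plus white noise

coeffs = pywt.wavedec(freq_out, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # robust noise SD estimate
thr = sigma * np.sqrt(2 * np.log(freq_out.size))     # universal threshold
den_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(den_coeffs, "db4")[: freq_out.size]

print("residual noise SD before:", np.std(freq_out - clean).round(4))
print("residual noise SD after :", np.std(denoised - clean).round(4))
```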
30. Epidemiological Methods in Regulatory Toxicology
- Author
- Ranft, Ulrich, Wellenius, Gregory A., Reichl, Franz-Xaver, editor, and Schwenk, Michael, editor
- Published
- 2021
- Full Text
- View/download PDF
31. The Application of Theoretical Variance#1 Method and Lifting Wavelet for Optic Gyroscopes
- Author
- Xuwei, Cheng, Yuan, Li, Min, Zhou, Zitong, Yan, Can, Xie, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Huang, De-Shuang, editor, Jo, Kang-Hyun, editor, Li, Jianqiang, editor, Gribova, Valeriya, editor, and Hussain, Abir, editor
- Published
- 2021
- Full Text
- View/download PDF
32. Complementarity between Bayesian Internal Quality Control results management and External Quality Assessment bivariate z-score analysis: application to a concrete case study.
- Author
- Jousselme E, Meijer P, Sobas F, and Tsiamyrtzis P
- Subjects
- Humans, Laboratories, Clinical standards, Laboratories, Clinical statistics & numerical data, Laboratories, Clinical organization & administration, Sensitivity and Specificity, Bayes Theorem, Quality Control, Quality Assurance, Health Care standards, Quality Assurance, Health Care methods, Quality Assurance, Health Care organization & administration
- Abstract
It is important that a clinical laboratory has implemented appropriate procedures for quality control, including both internal quality control (IQC) and external quality assessment (EQA), with the common goal of detecting systematic and random errors. This is the case for both the Hemohub® Bayesian tools for IQC results interpretation and the ECAT EQA optimised bivariate z-score analysis. In a concrete case study, we demonstrate the higher sensitivity and specificity of the optimised bivariate z-score analysis compared with the univariate approach. The Bayesian IQC results interpretation, like the ECAT analysis, confirmed the same conclusion: an increase in random error, corresponding to an increase in the inter-assay coefficient of variation (CV) on the date the EQA samples were run. Once the dysfunction was repaired, the improvement could be observed daily in the IQC results and then confirmed in the EQA results, thanks to the complementarity of the two approaches.
- Published
- 2025
- Full Text
- View/download PDF
33. Evaluating UCE Data Adequacy and Integrating Uncertainty in a Comprehensive Phylogeny of Ants.
- Author
- Borowiec ML, Zhang YM, Neves K, Ramalho MO, Fisher BL, Lucky A, and Moreau CS
- Abstract
While some relationships in phylogenomic studies have remained stable since the Sanger sequencing era, many challenging nodes remain, even with genome-scale data. Incongruence or lack of resolution in the phylogenomic era is frequently attributed to inadequate data modeling and analytical issues that lead to systematic biases. However, few studies investigate the potential for random error or establish expectations for the level of resolution achievable with a given empirical dataset and integrate uncertainties across methods when faced with conflicting results. Ants are the most species-rich lineage of social insects and one of the most ecologically important terrestrial animals. Consequently, ants have garnered significant research attention, including their systematics. Despite this, there has been no comprehensive genus-level phylogeny of the ants inferred using genomic data that thoroughly evaluates both signal strength and incongruence. In this study, we provide insight into and quantify uncertainty across the ant tree of life by utilizing the most taxonomically comprehensive Ultraconserved Elements dataset of ants to date, including 277 (81%) of recognized ant genera from all 16 extant subfamilies, and representing over 98% of described species. We use simulations to establish expectations for resolution, identify branches with less-than-expected concordance, and dissect the effects of data and model selection on recalcitrant nodes. Simulations show that hundreds of loci are needed to resolve recalcitrant nodes on our genus-level ant phylogeny. This demonstrates the continued role of random error in phylogenomic studies. Our analyses provide a comprehensive picture of support and incongruence across the ant phylogeny, while offering a more nuanced depiction of uncertainty and significantly expanding generic sampling. We use a consensus approach to integrate uncertainty across different analyses and find that assumptions about root age exert substantial influence on divergence dating. Our results suggest that advancing the understanding of ant phylogeny will require not only more data but also more refined phylogenetic models. We also provide a workflow for identifying under-supported nodes in concatenation analyses, outline a pragmatic way to reconcile conflicting results in phylogenomics, and introduce a user-friendly locus selection tool for divergence dating. (© The Author(s) 2025. Published by Oxford University Press on behalf of the Society of Systematic Biologists.)
- Published
- 2025
- Full Text
- View/download PDF
34. Potential impact of systematic and random errors in blood pressure measurement on the prevalence of high office blood pressure in the United States
- Author
- Swati Sakhuja, Byron C. Jaeger, Oluwasegun P. Akinyelure, Adam P. Bress, Daichi Shimbo, Joseph E. Schwartz, Shakia T. Hardy, George Howard, Paul Drawz, and Paul Muntner
- Subjects
- blood pressure, measurement error, misclassification, random error, Diseases of the circulatory (Cardiovascular) system, RC666-701
- Abstract
The authors examined the proportion of US adults that would have their high blood pressure (BP) status changed if systolic BP (SBP) and diastolic BP (DBP) were measured with systematic bias and/or random error rather than following a standardized protocol. Data from the 2017–2018 National Health and Nutrition Examination Survey (NHANES; n = 5176) were analyzed. BP was measured up to three times using a mercury sphygmomanometer by a trained physician following a standardized protocol, and the readings were averaged. High BP was defined as SBP ≥130 mm Hg or DBP ≥80 mm Hg. Among US adults not taking antihypertensive medication, 32.0% (95% CI: 29.6%, 34.4%) had high BP. If SBP and DBP were measured with systematic bias, 5 mm Hg for SBP and 3.5 mm Hg for DBP higher or lower than in NHANES, the proportion with high BP was estimated to be 44.4% (95% CI: 42.6%, 46.2%) and 21.9% (95% CI: 19.5%, 24.4%), respectively. Among US adults taking antihypertensive medication, 60.6% (95% CI: 57.2%, 63.9%) had high BP. If SBP and DBP were measured 5 and 3.5 mm Hg higher or lower than in NHANES, the proportion with high BP was estimated to be 71.8% (95% CI: 68.3%, 75.0%) and 48.4% (95% CI: 44.6%, 52.2%), respectively. If BP was measured with random error, with standard deviations of 15 mm Hg for SBP and 7 mm Hg for DBP, 21.4% (95% CI: 19.8%, 23.0%) of US adults not taking antihypertensive medication and 20.5% (95% CI: 17.7%, 23.3%) of those taking antihypertensive medication had their high BP status re-categorized. In conclusion, measuring BP with systematic or random errors may result in the misclassification of high BP for a substantial proportion of US adults.
- Published
- 2022
- Full Text
- View/download PDF
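Entry 34's random-error scenario is easy to sketch: add zero-mean noise (SD 15 mm Hg systolic, 7 mm Hg diastolic) to "true" BP values and count status changes against the ≥130/80 mm Hg threshold. The BP distributions below are invented, not the NHANES data.

```python
import numpy as np

rng = np.random.default_rng(17)
n = 5176
sbp = rng.normal(122, 16, n)          # synthetic standardized SBP
dbp = rng.normal(74, 10, n)           # synthetic standardized DBP

high_true = (sbp >= 130) | (dbp >= 80)
sbp_err = sbp + rng.normal(0, 15, n)  # random measurement error
dbp_err = dbp + rng.normal(0, 7, n)
high_err = (sbp_err >= 130) | (dbp_err >= 80)

print(f"prevalence (standardized): {high_true.mean():.1%}")
print(f"prevalence (with error)  : {high_err.mean():.1%}")
print(f"re-categorized           : {(high_true != high_err).mean():.1%}")
```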
35. Analysis of Spur Cylindrical Gear Pair Vibration Characteristic Considering Error and Randomness of Tooth Surface Friction
- Author
- Zaida Gao, Shengbo Li, Shengping Fu, and Polyakov Roman
- Subjects
- Random error, Tooth surface friction, Gear transmission, Gear dynamics, Mechanical engineering and machinery, TJ1-1570
- Abstract
The vibration characteristics of a gear pair when the error and randomness of the tooth surface friction parameters are taken into account are not yet clear. Combining statistical methods with the concentrated mass method, a numerical study of the random characteristics of the errors and tooth surface friction parameters is carried out, and the influence of gear tooth errors on the tooth surface friction parameters is analyzed. A bending-torsional coupled vibration model of a spur gear transmission is established that accounts for the error and randomness of the tooth surface friction parameters. The fourth-order Runge-Kutta method is used for the numerical solution, and the vibration response of the gear transmission is obtained. The influences of gear error and the random tooth surface friction parameters on the gear vibration responses are explored. The results show that the dynamic responses of the gear pair exhibit more complicated randomness in the frequency domain and phase diagram under the collective effects of the error and the random tooth surface friction; however, error randomness interferes more with the dynamic stability of the gear system. The research results provide a theoretical reference for the dynamic design of gear transmissions.
- Published
- 2022
- Full Text
- View/download PDF
36. ARRAY RADIATION PATTERN RECOVERY UNDER RANDOM ERRORS USING CLUSTERED LINEAR ARRAY
- Author
- Ahmed J. Abdulqader, Raad H. Thaher, and Jafar R. Mohammed
- Subjects
- linear array, clustered array, random error, array pattern recovery, Engineering (General). Civil engineering (General), TA1-2040
- Abstract
In practice, random errors in the excitations (amplitude and phase) of array elements cause undesired variations in the array patterns. In this paper, a technique using clustered array elements with tapered amplitude excitations is introduced to reduce the impact of random weight errors and recover the desired patterns. The most beneficial feature of the suggested method is that it can be used in the design stage to account for any amplitude errors instantly. The cost function of the optimizer used is constrained to avoid any unwanted rises in sidelobe levels caused by unexpected perturbation errors. Furthermore, errors in the element amplitude excitations are assumed to occur either randomly or sectionally (i.e., an error affecting only a subset of the array elements) across the entire array aperture. The validity of the proposed approach is fully supported by simulation studies.
- Published
- 2022
- Full Text
- View/download PDF
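The problem setup in entry 36 can be visualised by perturbing the element excitations of a uniform linear array and comparing array factors. Geometry and error levels below are assumptions; the paper's clustered-taper recovery optimizer is not included.

```python
import numpy as np

rng = np.random.default_rng(4)
N, d = 16, 0.5                      # elements, spacing in wavelengths
theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
n = np.arange(N)

def array_factor_db(weights):
    phase = 2j * np.pi * d * np.outer(np.sin(theta), n)
    af = np.abs(np.exp(phase) @ weights)
    return 20 * np.log10(af / af.max())

ideal = np.ones(N, dtype=complex)
# Random amplitude (10%) and phase (0.1 rad) errors on the excitations:
perturbed = ideal * (1 + rng.normal(0, 0.1, N)) * np.exp(1j * rng.normal(0, 0.1, N))

mask = np.abs(theta) > np.deg2rad(10)   # exclude the main-beam region
print("peak sidelobe, ideal    :", round(array_factor_db(ideal)[mask].max(), 1), "dB")
print("peak sidelobe, perturbed:", round(array_factor_db(perturbed)[mask].max(), 1), "dB")
```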
37. Comparison of two measurement devices for obtaining horizontal force-velocity profile variables during sprint running.
- Author
- Feser, Erin, Lindley, Kyle, Clark, Kenneth, Bezodis, Neil, Korfist, Christian, and Cronin, John
- Subjects
- FOOTBALL training, FOOTBALL, STALKING, RADAR, VELOCITY
- Abstract
This study established the magnitude of systematic bias and random error of horizontal force-velocity (F-v) profile variables obtained from a 1080 Sprint compared with those obtained from a Stalker ATS II radar device. Twenty high-school athletes from an American football training group completed a 30 m sprint while the two devices simultaneously measured velocity-time data. The velocity-time data were modelled by an exponential equation fitting process and then used to calculate individual F-v profiles and related variables (theoretical maximum velocity, theoretical maximum horizontal force, slope of the linear F-v profile, peak power, time constant tau, and horizontal maximal velocity). The devices were compared by determining the systematic bias and the 95% limits of agreement (random error) for all variables, both of which were expressed as percentages of the mean radar value. All bias values were within 6.32%, with the 1080 Sprint reporting higher values for tau, horizontal maximal velocity, and theoretical maximum velocity. Random error was lowest for the velocity-based variables but exceeded 7% for all others, being greatest for the slope of the F-v profile at ±12.3%. These results provide practitioners with the information necessary to determine whether the agreement between the devices and the magnitude of random error are acceptable within the context of their specific application. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
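The agreement statistics reported in entry 37 (systematic bias plus 95% limits of agreement, expressed as percentages of the radar mean) follow the standard Bland-Altman recipe. A sketch on synthetic paired data:

```python
import numpy as np

rng = np.random.default_rng(8)
radar = rng.normal(8.5, 0.6, 20)                     # e.g. theoretical max velocity (m/s)
sprint1080 = radar * 1.02 + rng.normal(0, 0.15, 20)  # second device, slight bias

diff = sprint1080 - radar
bias = diff.mean()                                   # systematic bias
loa = 1.96 * diff.std(ddof=1)                        # half-width of 95% limits of agreement

ref = radar.mean()
print(f"bias: {100 * bias / ref:+.2f}% of radar mean")
print(f"95% limits of agreement: ±{100 * loa / ref:.2f}%")
```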
38. Influence of the Dynamic Goniometer's Bearing Unit on the Measurement Error.
- Author
- Pavlov, P. A. and Ivashchenko, E. M.
- Subjects
- UNITS of measurement, BALL bearings, GONIOMETERS, OPTICAL sensors, ANGULAR measurements, WAVELETS (Mathematics), MEASUREMENT errors
- Abstract
In order to improve the accuracy of the dynamic goniometer, the influence of its bearing unit on the measurement error has been studied. The spectral characteristics of the random error of two dynamic goniometers with similar optical angle sensors and different bearings (air and angular-contact ball bearings) have been examined. It is shown that the use of a ball bearing in the goniometer leads to nonstationarity of the random error of the angular measurements. Using wavelet analysis and the Allan variance, the random error of the dynamic goniometer has been analyzed, the sources of nonstationarity have been identified, and its potential accuracy has been determined. It has been established that the use of an air bearing increases the accuracy of a dynamic goniometer and brings it close to the accuracy of a static goniometer, in which the influence of the bearing unit is minimized. The results obtained are of interest to users and developers of dynamic goniometers. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
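A minimal non-overlapping Allan deviation estimator, the tool entry 38 applies to the goniometer's random error. The input here is synthetic white noise; a real analysis would feed in the angle-sensor error series.

```python
import numpy as np

def allan_deviation(y, taus, fs=1.0):
    """y: regularly sampled error series; taus: averaging times in seconds."""
    out = []
    for tau in taus:
        m = int(tau * fs)                        # samples per cluster
        k = len(y) // m
        means = y[: k * m].reshape(k, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(means) ** 2)  # non-overlapping Allan variance
        out.append(np.sqrt(avar))
    return np.array(out)

rng = np.random.default_rng(6)
series = rng.normal(0, 0.04, 100_000)            # arcsec-scale white noise
for tau, adev in zip([1, 10, 100, 1000], allan_deviation(series, [1, 10, 100, 1000])):
    print(f"tau={tau:>4} s  ADEV={adev:.4f} arcsec")   # ~ tau^(-1/2) for white noise
```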
39. Random Error
- Author
- Maggino, Filomena, editor
- Published
- 2023
- Full Text
- View/download PDF
40. Accelerated gradient descent using improved Selective Backpropagation.
- Author
- Hosseinali, Farzad
- Subjects
- PRODUCTION standards, GENERALIZATION, FORECASTING, ALGORITHMS
- Abstract
An improved version of Selective Backpropagation (SBP+) is described which significantly reduces training time and enhances generalization. The algorithm eliminates correctly predicted instances from Backpropagation (BP) within an iteration. The minimum useful content level needed to define a correct prediction is determined using the hyperparameter c. A new performance measure dm is introduced, which denotes the average fraction of instances remaining in the BP during an epoch. With a proper selection of c, a model can be adjusted to learn dominant patterns (mostly found in instances located further away from a decision hyperplane) and skip minor details (the very details which are due to random error and can be found in instances close to a decision hyperplane). Since SBP+ assigns a lower correct softmax score to instances located closer to a decision hyperplane, it is shown that, under certain conditions, a neural network's (NN) standard crisp output can be converted to a set of fuzzy membership scores. The regulating effect of the content level c is compared with other standard techniques and its advantages are outlined. Lastly, the statistical process leading to minimization of the useful total error is explicated, and the ability of SBP+ to pinpoint outliers is illustrated with an example. • Improved Selective Backpropagation is described, which accelerates training. • The acceleration is due to the elimination of accurate predictions from backpropagation. • The improvement in accuracy is due to the inclusion of a random error term. • The minimum content to be learned from instances is determined with the hyperparameter c. • A performance measure is introduced, denoting the fraction of remaining instances. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
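A toy version of the selective-backpropagation step in entry 40: instances whose correct-class softmax score already exceeds the content level c are excluded from the backward pass, and the kept fraction plays the role of the paper's dm measure. PyTorch sketch with a placeholder model and an assumed value of c.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(20, 3)                      # stand-in classifier
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 20), torch.randint(0, 3, (64,))
c = 0.9                                             # assumed content level

logits = model(x)
with torch.no_grad():
    correct_score = F.softmax(logits, dim=1).gather(1, y[:, None]).squeeze(1)
keep = correct_score < c                            # only "not yet learned" instances
d_m = keep.float().mean().item()                    # fraction kept this step (cf. dm)

if keep.any():
    loss = F.cross_entropy(logits[keep], y[keep])   # backprop only on kept instances
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"fraction of instances backpropagated: {d_m:.2f}")
```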
41. Sampling Errors, Bias, and Objectivity
- Author
- Flinton, David M. and Ramlaul, Aarthi, editor
- Published
- 2020
- Full Text
- View/download PDF
42. Effect of random phase error and baseline roll angle error on eddy identification by interferometric imaging altimeter.
- Author
- Gao, Le, Sun, Hanwei, Qi, Jifeng, and Jiang, Qiufu
- Subjects
- INTERFEROMETRY, ALTIMETERS, OCEAN surface topography, INFORMATION storage & retrieval systems, SATELLITE launching ships
- Abstract
To achieve better observation of the sea surface, a new generation of wide-swath interferometric altimeter satellites has been proposed. Before satellite launch, it is particularly important to study the data processing methods and carry out a detailed error analysis for ocean satellites, because these directly determine the satellite's ultimate ability to capture ocean information. For this purpose, ocean eddies, which cause significant changes in sea surface elevation, are considered as a specific case of ocean signals; they are well suited to theoretical simulation of the sea surface and systematic simulation of the altimeter. We analyzed the impacts of random error and baseline error on the sea surface and ocean signals, and proposed a combined strategy of low-pass filtering, empirical orthogonal function (EOF) decomposition, and linear fitting to remove the errors. With this strategy, sea surface anomalies caused by the errors were considerably reduced, and the capability of the satellite to capture ocean information was enhanced. Notably, we found that the baseline error in sea surface height data was likely to cause inaccuracy in eddy boundary detection, as well as false eddy detection. These abnormalities could be prevented in the "clean" sea surface height obtained after error removal. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
43. The Internal Validity of RCTs: The Case of Random Errors in Nutrition.
- Author
- López-Mas, Roberto
- Subjects
- RANDOMIZED controlled trials, CLINICAL medicine research, SCIENTIFIC method, GOVERNMENT policy, NUTRITION, UNCERTAINTY, HUMAN error
- Abstract
RCTs are considered the gold standard in various evidence hierarchies for assessing the quality of scientific research, as well as for guiding regulations and public policy. In this article, the challenges this methodology faces in nutrition science are identified by analyzing the difficulties involved in reducing random error. Minimizing random error may be impossible in a context in which large-scale RCTs are not feasible and meta-analyses are not an option. The conclusion is that guaranteeing the reliability of the evidence in nutrition RCTs can be a highly complex task. [ABSTRACT FROM AUTHOR]
- Published
- 2022
44. Opportunities and Illusions of Using Large Samples in Statistical Inference.
- Author
- Szreder, Mirosław
- Abstract
Copyright of Polish Statistician / Wiadomości Statystyczne is the property of State Treasury - Statistics Poland and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2022
- Full Text
- View/download PDF
45. An alternative method for assessing the agreement between test results
- Abstract
The paper proposes a new way of assessing the agreement between measurement results during test quality assurance procedures in the laboratory. The decision-making rule is based on the measurement uncertainty. The probability with which the mathematical expectations of the measured data lie within the uncertainty of the indicator measurements is proposed as a quantitative indicator. This quantitative indicator of the impact of methodological errors is the ratio of the difference between the mathematical expectations of the measurement results obtained in different series of measurements to the average uncertainty of their determination, taking into account the applied decision-making rule. The indicator is based on the assumption that the measurement results are normally distributed. To simplify the process of calculating the quality assurance indicators and reduce the risk of calculation mistakes, an "Agreement" module was created on the basis of a virtual test laboratory. The input data for the calculation of agreement in the module are: the measuring equipment, the measurement results themselves (at least three for each tester), the coverage factor, the full names of the people performing the measurements, and the coefficient of the decision-making rule.
- Published
- 2024
46. Prediction skill of tropical synoptic scale transients from ECMWF and NCEP ensemble prediction systems
- Author
-
Landu, Kiranmayi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Indian Institute of Technology Bhubaneshwar, Bhubaneshwar (India)]
- Published
- 2016
- Full Text
- View/download PDF
47. An Advanced Framework for Merging Remotely Sensed Soil Moisture Products at the Regional Scale Supported by Error Structure Analysis: A Case Study on the Tibetan Plateau
- Author
-
Jian Kang, Rui Jin, and Xin Li
- Subjects
Data fusion ,error decomposition ,random error ,remote sensing product ,soil moisture (SM) ,systemic error ,Ocean engineering ,TC1501-1800 ,Geophysics. Cosmic physics ,QC801-809 - Abstract
Data fusion can effectively improve the accuracy of remotely sensed (RS) soil moisture (SM) products. Understanding the error structures of RS SM products is beneficial for formulating a data fusion scheme. In this article, a data fusion scheme is examined on the Tibetan Plateau, and the Soil Moisture Active Passive mission, Soil Moisture and Ocean Salinity mission, and Advanced Microwave Scanning Radiometer 2 products are used as the experimental input datasets. The RS apparent thermal inertia (ATI) is transformed into SM values as the reference data with reliable systematic variability. The ATI-based SM, along with three RS SM products, is introduced into the triple collocation (TC) method to decompose the errors of the three RS SM products into systematic and random errors at each RS pixel. Due to the presence of systematic errors, the temporal mean values and amplitudes of the three RS SM products were calibrated by those of the ATI-based SM. The rescaled anomalies (including amplitude and random error) were merged according to their random errors estimated by the TC method, and then the merged anomalies were added to the temporal mean values of the ATI-based SM to obtain the final merged results. Compared with the merged European Space Agency Climate Change Initiative passive SM product and input SM datasets, the merged results in this article exhibit optimal accuracy. The scheme for merging RS SM products shows high data fusion performance and can be further considered a reliable way to obtain a high-quality merged RS SM dataset.
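A minimal sketch of the error estimation and merging steps may help. The covariance-based triple collocation estimator below is the standard textbook form; the synthetic data, error magnitudes, and the omission of the ATI-based rescaling step are simplifications, not the authors' implementation.

```python
import numpy as np

def tc_error_variances(x, y, z):
    """Covariance-based triple collocation estimate of random error variances.

    Assumes the three collocated series observe the same signal with
    mutually independent zero-mean errors (standard TC assumptions).
    """
    C = np.cov(np.vstack([x, y, z]))
    var_x = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    var_y = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    var_z = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return np.array([var_x, var_y, var_z])

rng = np.random.default_rng(1)
truth = 0.25 + 0.05 * np.sin(np.linspace(0, 8 * np.pi, 400))  # synthetic SM signal
products = [truth + s * rng.standard_normal(truth.size) for s in (0.02, 0.04, 0.03)]

err_var = tc_error_variances(*products)
weights = (1.0 / err_var) / np.sum(1.0 / err_var)  # inverse-error-variance merging weights
merged = sum(w * p for w, p in zip(weights, products))

print("estimated error std:", np.sqrt(err_var).round(3))
print("merged RMSE:", np.sqrt(np.mean((merged - truth) ** 2)).round(4))
```

Weighting each product by the inverse of its TC-estimated random error variance is the least-squares-optimal combination under the stated independence assumptions, which is why error decomposition precedes merging in the scheme.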
- Published
- 2021
- Full Text
- View/download PDF
48. High quality (certainty) evidence changes less often than low‐quality evidence, but the magnitude of effect size does not systematically differ between studies with low versus high‐quality evidence.
- Author
-
Djulbegovic, Benjamin, Ahmed, Muhammad Muneeb, Hozo, Iztok, Koletsi, Despina, Hemkens, Lars, Price, Amy, Riera, Rachel, Nadanovsky, Paulo, dos Santos, Ana Paula Pires, Melo, Daniela, Pathak, Ranjan, Pacheco, Rafael Leite, Fontes, Luis Eduardo, Miranda, Enderson, and Nunan, David
- Subjects
- *
MEDICAL databases , *INFERENTIAL statistics , *CONFIDENCE intervals , *EFFECT sizes (Statistics) , *SYSTEMATIC reviews , *RESEARCH methodology , *EVIDENCE-based medicine , *DESCRIPTIVE statistics , *ODDS ratio , *SCIENTIFIC errors - Abstract
Rationale, Aims, and Objectives: It is generally believed that low-quality (low-certainty) evidence (CoE) generates inaccurate estimates of treatment effects more often than high-certainty evidence. As a result, we would expect that (a) estimates of the effects of health interventions initially based on high CoE change less frequently than effects estimated from lower CoE, and (b) the estimated magnitude of effect size differs between high and low CoE. Empirical assessment of these foundational principles of evidence‐based medicine has been lacking. Methods: We reviewed the Cochrane Database of Systematic Reviews from January 2016 through May 2021 for pairs of original and updated reviews with a change in CoE assessments based on the Grading of Recommendations Assessment, Development and Evaluation (GRADE) method. We assessed the difference in effect sizes between the original and updated reviews as a function of change in CoE, which we report as a ratio of odds ratios (ROR). We compared RORs generated in studies in which CoE changed from very low/low (VL/L) to moderate/high (M/H) versus from M/H to VL/L. Heterogeneity and inconsistency were assessed using the tau and I² statistics. We also assessed the change in precision of effect estimates (by calculating the ratio of standard errors, seR) and the absolute deviation in estimates of treatment effects (aROR). Results: Four hundred and nineteen pairs of reviews were included, of which 414 (207 × 2) informed the CoE appraisal and 384 (192 × 2) the assessment of effect size. We found that CoE originally appraised as VL/L had 2.1 [95% confidence interval (CI): 1.19–4.12; p = 0.0091] times higher odds of being changed in future studies than M/H CoE. However, the effect size was not different (p = 1) when CoE changed from VL/L → M/H [ROR = 1.02 (95% CI: 0.74–1.39)] compared with M/H → VL/L (ROR = 1.02 [95% CI: 0.44–2.37]). Similar overlap in aROR between the VL/L → M/H and M/H → VL/L subgroups was observed [median (IQR): 1.12 (1.07–1.57) vs. 1.21 (1.12–2.43)]. We observed large inconsistency across ROR estimates (I² = 99%). There was greater imprecision in treatment effects when CoE changed from VL/L → M/H (seR = 1.46) than when it changed from M/H → VL/L (seR = 0.72). Conclusions: We found that low‐quality evidence changes more often than high CoE. However, the effect size did not systematically differ between studies with low versus high CoE. The finding that the effect size did not differ between low and high CoE indicates an urgent need to refine current EBM critical appraisal methods. [ABSTRACT FROM AUTHOR]
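For readers unfamiliar with these metrics, a small sketch of plausible per-pair computations follows. The exact definitions of aROR and seR used in the study may differ; the CI-to-standard-error conversion assumes normality on the log-odds scale, and the numbers are illustrative only.

```python
import numpy as np

def review_pair_metrics(or_original, ci_original, or_updated, ci_updated):
    """Hypothetical per-pair metrics in the spirit of the abstract:
    ratio of odds ratios (ROR), a direction-free absolute deviation
    (aROR), and the ratio of standard errors (seR), with standard
    errors recovered from 95% CIs on the log-odds scale."""
    se = lambda ci: (np.log(ci[1]) - np.log(ci[0])) / (2 * 1.96)
    ror = or_updated / or_original
    aror = max(ror, 1.0 / ror)          # deviation from 1 regardless of direction
    ser = se(ci_updated) / se(ci_original)
    return ror, aror, ser

# Illustrative inputs only, not values taken from the study.
ror, aror, ser = review_pair_metrics(0.80, (0.60, 1.07), 0.70, (0.58, 0.85))
print(f"ROR={ror:.2f}, aROR={aror:.2f}, seR={ser:.2f}")
```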
- Published
- 2022
- Full Text
- View/download PDF
49. From Noise to Bias: Overconfidence in New Product Forecasting.
- Author
-
Feiler, Daniel and Tong, Jordan
- Subjects
NEW product development ,FORECASTING ,NOISE ,PRODUCT launches ,OPERATIONS management - Abstract
We study decision behavior in the selection, forecasting, and production for a new product. In a stylized behavioral model and five experiments, we generate new insight into when and why this combination of tasks can lead to overconfidence (specifically, overestimating the demand). We theorize that cognitive limitations lead to noisy interpretations of signal information, which itself is noisy. Because people are statistically naive, they directly use their noisy interpretation of the signal information as their forecast, thereby underaccounting for the uncertainty that underlies it. This process leads to unbiased forecast errors when considering products in isolation, but leads to positively biased forecasts for the products people choose to launch due to a selection effect. We show that this selection-driven overconfidence can be sufficiently problematic that, under certain conditions, choosing the product randomly can actually yield higher profits than when individuals themselves choose the product to launch. We provide mechanism evidence by manipulating the interpretation noise through information complexity—showing that even when the information is equivalent from a Bayesian perspective, more complicated information leads to more noise, which, in turn, leads to more overconfidence in the chosen products. Finally, we leverage this insight to show that getting a second independent forecast for a chosen product can significantly mitigate the overconfidence problem, even when both individuals have the same information. This paper was accepted by Charles Corbett, operations management. [ABSTRACT FROM AUTHOR]
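The selection effect at the heart of the paper is easy to reproduce in a few lines of simulation. The sketch below is a generic illustration of the mechanism, with all distributions and noise levels chosen arbitrarily rather than taken from the paper's model.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sims, n_products = 20000, 5
mu = rng.normal(100, 20, (n_sims, n_products))    # true demands
signal = mu + rng.normal(0, 15, mu.shape)         # noisy signal information
forecast = signal + rng.normal(0, 10, mu.shape)   # noisy interpretation, used naively as forecast

# Unconditionally, the naive forecasts are unbiased...
print("mean error, all products:   ", (forecast - mu).mean().round(2))

# ...but selecting the product with the highest forecast induces overconfidence.
rows = np.arange(n_sims)
chosen = forecast.argmax(axis=1)
print("mean error, chosen products:", (forecast[rows, chosen] - mu[rows, chosen]).mean().round(2))
```

Across all products the forecast errors average to roughly zero, but conditioning on having been chosen (the argmax) selects positive interpretation errors, producing systematic demand overestimation for launched products.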
- Published
- 2022
- Full Text
- View/download PDF
50. Epidemiology
- Author
-
McBride, Kate A., Ogbo, Felix, Page, Andrew, and Liamputtong, Pranee, editor
- Published
- 2019
- Full Text
- View/download PDF