1,118 results for "Random error"
Search Results
2. Global Navigation Satellite System Receiver Positioning in Harsh Environments via Clock Bias Prediction by Empirical Mode Decomposition and Back Propagation Neural Network Method.
- Author
- Du, Libin, Chen, Hao, Yuan, Yibo, Song, Longjiang, and Meng, Xiangqian
- Subjects
- GLOBAL Positioning System, HILBERT-Huang transform, STATISTICAL bias, BACK propagation, GPS receivers
- Abstract
This paper proposes a novel method to improve the short-term clock bias prediction accuracy of navigation receivers and thereby address the problem of low positioning accuracy when the satellite signal quality deteriorates. Considering that the clock bias of a navigation receiver is equivalent to a virtual satellite, the predicted value of the clock bias is used to assist navigation receivers in positioning. Consequently, a combined prediction method for navigation receiver clock bias based on Empirical Mode Decomposition (EMD) and Back Propagation Neural Network (BPNN) analysis theory is demonstrated. In view of the systematic and random errors in the clock bias data from navigation receivers, the EMD method is used to decompose the clock bias data; then, the BPNN prediction method is used to establish a high-precision clock bias prediction model; finally, based on the predicted clock bias, the three-dimensional position of the navigation receiver is obtained by expanding the observation equation. The experimental results show that the proposed model is suitable for clock bias time series prediction and that the three-dimensional positioning information it provides meets the requirements of navigation applications in harsh environments where only three satellites are available. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
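The decompose-then-predict pipeline described in the abstract above can be sketched in a few lines. The sketch below uses the PyEMD package and scikit-learn's MLPRegressor as a stand-in for a back-propagation network; the lag length, network size, and recursive forecasting loop are illustrative assumptions, not the authors' implementation.

```python
# Sketch: decompose clock bias into IMFs with EMD, fit a small MLP per IMF on
# lagged windows, forecast each IMF recursively, and sum the forecasts.
import numpy as np
from PyEMD import EMD                       # pip install EMD-signal
from sklearn.neural_network import MLPRegressor

def forecast_clock_bias(bias, lag=10, horizon=30):
    """bias: 1-D array of receiver clock-bias samples (units as measured)."""
    imfs = EMD().emd(np.asarray(bias, dtype=float))
    total = np.zeros(horizon)
    for imf in imfs:                        # predict each IMF separately
        X = np.array([imf[i:i + lag] for i in range(len(imf) - lag)])
        y = imf[lag:]
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
        window = list(imf[-lag:])
        for h in range(horizon):            # recursive multi-step forecast
            nxt = model.predict([window[-lag:]])[0]
            window.append(nxt)
            total[h] += nxt                 # recombine the IMF forecasts
    return total
```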
3. Global Navigation Satellite System Receiver Positioning in Harsh Environments via Clock Bias Prediction by Empirical Mode Decomposition and Back Propagation Neural Network Method
- Author
- Libin Du, Hao Chen, Yibo Yuan, Longjiang Song, and Xiangqian Meng
- Subjects
- receiver clock bias forecasting, random error, clock bias auxiliary positioning algorithm, satellite navigation, Chemical technology, TP1-1185
- Abstract
This paper proposes a novel method to improve the short-term clock bias prediction accuracy of navigation receivers and thereby address the problem of low positioning accuracy when the satellite signal quality deteriorates. Considering that the clock bias of a navigation receiver is equivalent to a virtual satellite, the predicted value of the clock bias is used to assist navigation receivers in positioning. Consequently, a combined prediction method for navigation receiver clock bias based on Empirical Mode Decomposition (EMD) and Back Propagation Neural Network (BPNN) analysis theory is demonstrated. In view of the systematic and random errors in the clock bias data from navigation receivers, the EMD method is used to decompose the clock bias data; then, the BPNN prediction method is used to establish a high-precision clock bias prediction model; finally, based on the predicted clock bias, the three-dimensional position of the navigation receiver is obtained by expanding the observation equation. The experimental results show that the proposed model is suitable for clock bias time series prediction and that the three-dimensional positioning information it provides meets the requirements of navigation applications in harsh environments where only three satellites are available.
- Published
- 2024
- Full Text
- View/download PDF
4. Improving the Accuracy of TanDEM-X Digital Elevation Model Using Least Squares Collocation Method.
- Author
- Shen, Xingdong, Zhou, Cui, and Zhu, Jianjun
- Subjects
- LEAST squares, DIGITAL elevation models, STANDARD deviations, BACK propagation
- Abstract
The TanDEM-X Digital Elevation Model (DEM) is limited by the radar side-view imaging mode, which still has gaps and anomalies that directly affect the application potential of the data. Many methods have been used to improve the accuracy of TanDEM-X DEM, but these algorithms primarily focus on eliminating systematic errors trending over a large area in the DEM, rather than random errors. Therefore, this paper presents the least-squares collocation-based error correction algorithm (LSC-TXC) for TanDEM-X DEM, which effectively eliminates both systematic and random errors, to enhance the accuracy of TanDEM-X DEM. The experimental results demonstrate that TanDEM-X DEM corrected by the LSC-TXC algorithm reduces the root mean square error (RMSE) from 6.141 m to 3.851 m, resulting in a significant improvement in accuracy (by 37.3%). Compared to three conventional algorithms, namely Random Forest, Height Difference Fitting Neural Network and Back Propagation in Neural Network, the presented algorithm demonstrates a reduction in the RMSEs of the corrected TanDEM-X DEMs by 6.5%, 7.6%, and 18.1%, respectively. This algorithm provides an efficient tool for correcting DEMs such as TanDEM-X for a wide range of areas. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
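For reference, the least-squares collocation predictor that underlies approaches like the one above can be written in its standard textbook form; the LSC-TXC algorithm itself adds DEM-specific modelling that is not reproduced here.

```latex
% l: observed elevation-error residuals at reference points, s: error signal
% predicted at a new location, C: signal covariances, D: noise covariance.
\hat{s} = C_{s\ell}\,\left(C_{\ell\ell} + D\right)^{-1}\ell ,
\qquad
\sigma_{\hat{s}}^{2} = C_{ss} - C_{s\ell}\,\left(C_{\ell\ell} + D\right)^{-1} C_{\ell s}
```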
5. Evaluation and Error Decomposition of IMERG Product Based on Multiple Satellite Sensors.
- Author
- Li, Yunping, Zhang, Ke, Bardossy, Andras, Shen, Xiaoji, and Cheng, Yujia
- Subjects
- DETECTORS, DECOMPOSITION method, FALSE alarms, REMOTE sensing, SAMPLE size (Statistics)
- Abstract
The Integrated Multisatellite Retrievals for GPM (IMERG) is designed to derive precipitation by merging data from all the passive microwave (PMW) and infrared (IR) sensors. While the input source errors originating from the PMW and IR sensors are important, their structure, characteristics, and algorithm improvement remain unclear. Our study utilized a four-component error decomposition (4CED) method and a systematic and random error decomposition method to evaluate the detectability of IMERG dataset and identify the precipitation errors based on the multi-sensors. The 30 min data from 30 precipitation stations in the Tunxi Watershed were used to evaluate the IMERG data from 2018 to 2020. The input source includes five types of PMW sensors and IR instruments. The results show that the sample ratio for IR (Morph, IR + Morph, and IR only) is much higher than that for PMW (AMSR2, SSMIS, GMI, MHS, and ATMS), with a ratio of 72.8% for IR sources and a ratio of 27.2% for PMW sources. The high false ratio of the IR sensor leads to poor detectability performance of the false alarm ratio (FAR, 0.5854), critical success index (CSI, 0.3014), and Brier score (BS, 0.1126). As for the 4CED, Morph and Morph + IR have a large magnitude of high total bias (TB), hit overestimate bias (HOB), hit underestimate bias (HUB), false bias (FB), and miss bias (MB), which is related to the prediction ability and sample size. In addition, systematic error is the prominent component for AMSR2, SSMIS, GMI, and Morph + IR, indicating some inherent error (retrieval algorithm) that needs to be removed. These findings can support improving the retrieval algorithm and reducing errors in the IMERG dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
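A compact sketch of how satellite-minus-gauge differences can be split into hit/false/miss components in the spirit of the 4CED idea described above; the rain/no-rain threshold and the sign conventions are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def four_component_bias(sat, gauge, thresh=0.1):
    """sat, gauge: collocated precipitation samples (e.g., mm per 30 min)."""
    sat, gauge = np.asarray(sat, float), np.asarray(gauge, float)
    hit   = (sat >= thresh) & (gauge >= thresh)
    false = (sat >= thresh) & (gauge < thresh)        # satellite rains, gauge dry
    miss  = (sat < thresh) & (gauge >= thresh)        # gauge rains, satellite dry
    diff = sat - gauge
    comps = {
        "hit_over":  diff[hit & (diff > 0)].sum(),    # HOB
        "hit_under": diff[hit & (diff < 0)].sum(),    # HUB (negative)
        "false":     sat[false].sum(),                # FB
        "miss":      -gauge[miss].sum(),              # MB (negative)
    }
    comps["total"] = sum(comps.values())              # TB
    return comps
```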
6. Illustration of 2 Fusion Designs and Estimators.
- Author
- Cole, Stephen R, Edwards, Jessie K, Breskin, Alexander, Rosin, Samuel, Zivich, Paul N, Shook-Sa, Bonnie E, and Hudgens, Michael G
- Subjects
- EXPERIMENTAL design, STATISTICS, COMPUTER simulation, CONFIDENCE intervals, RESEARCH methodology, RESEARCH funding, STATISTICAL models, DATA analysis, MEASUREMENT errors, SCIENTIFIC errors, RESEARCH evaluation
- Abstract
"Fusion" study designs combine data from different sources to answer questions that could not be answered (as well) by subsets of the data. Studies that augment main study data with validation data, as in measurement-error correction studies or generalizability studies, are examples of fusion designs. Fusion estimators, here solutions to stacked estimating functions, produce consistent answers to identified research questions using data from fusion designs. In this paper, we describe a pair of examples of fusion designs and estimators, one where we generalize a proportion to a target population and one where we correct measurement error in a proportion. For each case, we present an example motivated by human immunodeficiency virus research and summarize results from simulation studies. Simulations demonstrate that the fusion estimators provide approximately unbiased results with appropriate 95% confidence interval coverage. Fusion estimators can be used to appropriately combine data in answering important questions that benefit from multiple sources of information. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
7. A Random Error Suppression Method Based on IGWPSO-ELM for Micromachined Silicon Resonant Accelerometers.
- Author
- Wang, Peng, Huang, Libin, Zhao, Liye, and Ding, Xukai
- Subjects
- GREY Wolf Optimizer algorithm, RANDOM walks, MACHINE learning, WHITE noise, SILICON, ACCELEROMETERS
- Abstract
There are various errors in practical applications of micromachined silicon resonant accelerometers (MSRA), among which the composition of random errors is complex and uncertain. In order to improve the output accuracy of MSRA, this paper proposes an MSRA random error suppression method based on an improved grey wolf and particle swarm optimized extreme learning machine (IGWPSO-ELM). A modified wavelet threshold function is firstly used to separate the white noise from the useful signal. The output frequency at the previous sampling point and the sequence value are then added to the current output frequency to form a three-dimensional input. Additional improvements are made on the particle swarm optimized extreme learning machine (PSO-ELM): the grey wolf optimization (GWO) is fused into the algorithm and the three factors (inertia, acceleration and convergence) are non-linearized to improve the convergence efficiency and accuracy of the algorithm. The model trained offline using IGWPSO-ELM is applied to predicting compensation experiments, and the results show that the method is able to reduce velocity random walk from the original 4.3618 μg/√Hz to 2.1807 μg/√Hz, bias instability from the original 2.0248 μg to 1.3815 μg, and acceleration random walk from the original 0.53429 μg·√Hz to 0.43804 μg·√Hz, effectively suppressing the random error in the MSRA output. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
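The white-noise separation step mentioned in the abstract above is conventionally done with wavelet thresholding. The sketch below uses PyWavelets with the standard soft/universal threshold; the paper's modified threshold function would replace the pywt.threshold call.

```python
import numpy as np
import pywt   # PyWavelets

def wavelet_denoise(x, wavelet="db4", level=4):
    """Split white noise from a resonant-accelerometer frequency output series."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale from finest detail level
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))           # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]        # denoised signal
```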
8. Potential impact of systematic and random errors in blood pressure measurement on the prevalence of high office blood pressure in the United States
- Author
- Swati Sakhuja, Byron C. Jaeger, Oluwasegun P. Akinyelure, Adam P. Bress, Daichi Shimbo, Joseph E. Schwartz, Shakia T. Hardy, George Howard, Paul Drawz, and Paul Muntner
- Subjects
- blood pressure, measurement error, misclassification, random error, Diseases of the circulatory (Cardiovascular) system, RC666-701
- Abstract
The authors examined the proportion of US adults who would have their high blood pressure (BP) status changed if systolic BP (SBP) and diastolic BP (DBP) were measured with systematic bias and/or random error versus following a standardized protocol. Data from the 2017–2018 National Health and Nutrition Examination Survey (NHANES; n = 5176) were analyzed. BP was measured up to three times using a mercury sphygmomanometer by a trained physician following a standardized protocol and averaged. High BP was defined as SBP ≥130 mm Hg or DBP ≥80 mm Hg. Among US adults not taking antihypertensive medication, 32.0% (95%CI: 29.6%,34.4%) had high BP. If SBP and DBP were measured with systematic bias, 5 mm Hg for SBP and 3.5 mm Hg for DBP higher and lower than in NHANES, the proportion with high BP was estimated to be 44.4% (95%CI: 42.6%,46.2%) and 21.9% (95%CI 19.5%,24.4%). Among US adults taking antihypertensive medication, 60.6% (95%CI: 57.2%,63.9%) had high BP. If SBP and DBP were measured 5 and 3.5 mm Hg higher and lower than in NHANES, the proportion with high BP was estimated to be 71.8% (95%CI: 68.3%,75.0%) and 48.4% (95%CI: 44.6%,52.2%), respectively. If BP was measured with random error, with standard deviations of 15 mm Hg for SBP and 7 mm Hg for DBP, 21.4% (95%CI: 19.8%,23.0%) of US adults not taking antihypertensive medication and 20.5% (95%CI: 17.7%,23.3%) taking antihypertensive medication had their high BP status re-categorized. In conclusion, measuring BP with systematic or random errors may result in the misclassification of high BP for a substantial proportion of US adults.
- Published
- 2022
- Full Text
- View/download PDF
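The re-categorization effect of purely random measurement error can be reproduced qualitatively with a toy Monte Carlo. Only the thresholds (130/80 mm Hg) and error SDs (15/7 mm Hg) below come from the abstract; the "true" BP distributions are invented placeholders, not NHANES data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
sbp = rng.normal(122, 16, n)            # assumed "true" SBP distribution (placeholder)
dbp = rng.normal(74, 10, n)             # assumed "true" DBP distribution (placeholder)
true_high = (sbp >= 130) | (dbp >= 80)

sbp_obs = sbp + rng.normal(0, 15, n)    # random error SDs quoted in the abstract
dbp_obs = dbp + rng.normal(0, 7, n)
obs_high = (sbp_obs >= 130) | (dbp_obs >= 80)

print("fraction re-categorized:", np.mean(true_high != obs_high))
```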
9. Analysis of Spur Cylindrical Gear Pair Vibration Characteristic Considering Error and Randomness of Tooth Surface Friction
- Author
- Zaida Gao, Shengbo Li, Shengping Fu, and Polyakov Roman
- Subjects
- Random error, Tooth surface friction, Gear transmission, Gear dynamics, Mechanical engineering and machinery, TJ1-1570
- Abstract
The vibration characteristics of gear pairs are not well understood when gear errors and the randomness of tooth surface friction parameters are taken into account. Combining statistical methods with the lumped-mass method, the random characteristics of gear errors and tooth surface friction parameters are studied numerically, and the influence of gear tooth errors on the tooth surface friction parameters is analyzed. A bending-torsional coupled vibration model of a spur gear transmission is established that accounts for the error and randomness of the tooth surface friction parameters. The fourth-order Runge-Kutta method is used for the numerical solution, and the vibration response of the gear transmission is obtained. The influences of gear error and random tooth surface friction parameters on the gear vibration responses are explored. The results show that the dynamic responses of the gear pair exhibit more complicated randomness in the frequency domain and the phase diagram under the combined effects of gear error and random tooth surface friction, with error randomness interfering more strongly with the dynamic stability of the gear system. The results provide a theoretical reference for the dynamic design of gear transmissions.
- Published
- 2022
- Full Text
- View/download PDF
10. ARRAY RADIATION PATTERN RECOVERY UNDER RANDOM ERRORS USING CLUSTERED LINEAR ARRAY
- Author
- Ahmed J. Abdulqader, Raad H. Thaher, and Jafar R. Mohammed
- Subjects
- linear array, clustered array, random error, array pattern recovery, Engineering (General). Civil engineering (General), TA1-2040
- Abstract
In practice, random errors in the excitations (amplitude and phase) of array elements cause undesired variations in the array patterns. In this paper, a clustered-array-elements technique with tapered amplitude excitations is introduced to reduce the impact of random weight errors and recover the desired patterns. The most beneficial feature of the suggested method is that it can be used in the design stage to account for any amplitude errors instantly. The cost function of the optimizer used is restricted to avoid any unwanted rises in sidelobe levels caused by unexpected perturbation errors. Furthermore, errors in the element amplitude excitations are assumed to occur either randomly or sectionally (i.e., an error affecting only a subset of the array elements) across the entire array aperture. The validity of the proposed approach is entirely supported by simulation studies.
- Published
- 2022
- Full Text
- View/download PDF
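The effect of random amplitude and phase errors on a linear-array pattern (the problem the clustering technique above addresses) can be visualized in a few lines. The element count, spacing, and error levels are arbitrary illustration values, and the clustering/optimization step itself is not shown.

```python
import numpy as np

def array_factor(weights, d=0.5, n_theta=721):
    """Normalized pattern (dB) of a uniform linear array; d is spacing in wavelengths."""
    theta = np.linspace(-np.pi / 2, np.pi / 2, n_theta)
    n = np.arange(len(weights))
    phase = 2j * np.pi * d * np.outer(np.sin(theta), n)
    af = np.abs(np.exp(phase) @ weights)
    return theta, 20 * np.log10(af / af.max())

N = 32
rng = np.random.default_rng(1)
ideal = np.ones(N)
perturbed = (ideal * (1 + 0.10 * rng.standard_normal(N))          # ~10% amplitude error
             * np.exp(1j * np.deg2rad(5) * rng.standard_normal(N)))  # ~5 deg phase error
theta, ideal_db = array_factor(ideal)
_, pert_db = array_factor(perturbed)   # raised sidelobes show the impact of the errors
```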
11. Szanse i iluzje dotyczące korzystania z dużych prób we wnioskowaniu statystycznym [Opportunities and illusions of using large samples in statistical inference].
- Author
- Szreder, Mirosław
- Published
- 2022
- Full Text
- View/download PDF
12. Quality Control in RT-PCR Viral Load Assays: Evaluation of Analytical Performance for HIV, HBV, and HCV.
- Author
- Gomes GTA, Lima EG, de Oliveira Dos Santos VT, Araújo LMSB, and Porto GMR
- Abstract
Introduction: Quality Control Management (QCM) in clinical laboratories is crucial for ensuring reliable results in analytical measurements, with biological variation being a key factor. The study focuses on assessing the analytical performance of the Reverse Transcription Polymerase Chain Reaction (RT-PCR) system for Human Immunodeficiency Virus (HIV), Hepatitis B (HBV), and Hepatitis C (HCV). Five models proposed between 1999 and 2014 offer different approaches to evaluating analytical quality, with Model 2 based on biological variation and Model 5 considering the current state of the art. The study evaluates the RT-PCR system's analytical performance through Internal Quality Control (IQC) and External Quality Control (EQC). Materials and Methods: The Laboratório Central de Saúde Pública do Estado do Ceará (LACEN-CE) conducted daily IQC using commercial kits, and EQC was performed through proficiency testing rounds. Random error, systematic error, and total error were determined for each analyte. Results: Analytical performance, assessed through CV and random error, met specifications, with HIV and HBV classified as "desirable" and "optimal." EQC results indicated low systematic error, contributing to total errors considered clinically insignificant. Conclusion: The study highlights the challenge of defining analytical specifications without sufficient biological variability data. Model 5 is deemed the most suitable. The analytical performance of the RT-PCR system for HIV, HBV, and HCV at LACEN-CE was satisfactory, emphasizing the importance of continuous quality control in molecular biology methodologies. (Copyright © 2024 International Federation of Clinical Chemistry and Laboratory Medicine (IFCC). All rights reserved.)
- Published
- 2024
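For context, the random, systematic, and total error figures referred to in this abstract are conventionally computed as below (a Westgard-style total-error combination; the study's exact formulas are not stated in the abstract).

```latex
% CV from internal QC replicates, bias from proficiency-testing targets:
\mathrm{CV\%} = 100\,\frac{s}{\bar{x}}, \qquad
\mathrm{Bias\%} = 100\,\frac{\bar{x} - x_{\mathrm{target}}}{x_{\mathrm{target}}}, \qquad
\mathrm{TE} = \lvert\mathrm{Bias\%}\rvert + 1.65\,\mathrm{CV\%}
```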
13. Improving the Accuracy of TanDEM-X Digital Elevation Model Using Least Squares Collocation Method
- Author
- Xingdong Shen, Cui Zhou, and Jianjun Zhu
- Subjects
- least squares collocation method, systematic error, random error, TanDEM-X DEM, ICESat-2, Science
- Abstract
The TanDEM-X Digital Elevation Model (DEM) is limited by the radar side-view imaging mode, which still has gaps and anomalies that directly affect the application potential of the data. Many methods have been used to improve the accuracy of TanDEM-X DEM, but these algorithms primarily focus on eliminating systematic errors trending over a large area in the DEM, rather than random errors. Therefore, this paper presents the least-squares collocation-based error correction algorithm (LSC-TXC) for TanDEM-X DEM, which effectively eliminates both systematic and random errors, to enhance the accuracy of TanDEM-X DEM. The experimental results demonstrate that TanDEM-X DEM corrected by the LSC-TXC algorithm reduces the root mean square error (RMSE) from 6.141 m to 3.851 m, resulting in a significant improvement in accuracy (by 37.3%). Compared to three conventional algorithms, namely Random Forest, Height Difference Fitting Neural Network and Back Propagation in Neural Network, the presented algorithm demonstrates a reduction in the RMSEs of the corrected TanDEM-X DEMs by 6.5%, 7.6%, and 18.1%, respectively. This algorithm provides an efficient tool for correcting DEMs such as TanDEM-X for a wide range of areas.
- Published
- 2023
- Full Text
- View/download PDF
14. From Noise to Bias: Overconfidence in New Product Forecasting.
- Author
- Feiler, Daniel and Tong, Jordan
- Subjects
- NEW product development, FORECASTING, NOISE, PRODUCT launches, OPERATIONS management
- Abstract
We study decision behavior in the selection, forecasting, and production for a new product. In a stylized behavioral model and five experiments, we generate new insight into when and why this combination of tasks can lead to overconfidence (specifically, overestimating the demand). We theorize that cognitive limitations lead to noisy interpretations of signal information, which itself is noisy. Because people are statistically naive, they directly use their noisy interpretation of the signal information as their forecast, thereby underaccounting for the uncertainty that underlies it. This process leads to unbiased forecast errors when considering products in isolation, but leads to positively biased forecasts for the products people choose to launch due to a selection effect. We show that this selection-driven overconfidence can be sufficiently problematic that, under certain conditions, choosing the product randomly can actually yield higher profits than when individuals themselves choose the product to launch. We provide mechanism evidence by manipulating the interpretation noise through information complexity—showing that even when the information is equivalent from a Bayesian perspective, more complicated information leads to more noise, which, in turn, leads to more overconfidence in the chosen products. Finally, we leverage this insight to show that getting a second independent forecast for a chosen product can significantly mitigate the overconfidence problem, even when both individuals have the same information. This paper was accepted by Charles Corbett, operations management. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
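The selection effect described in the abstract above is easy to reproduce in a toy simulation: forecasts that are unbiased for products in isolation become positively biased once the best-looking product is chosen. All distributions and noise levels below are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_products = 10_000, 5
demand = rng.normal(100, 20, (n_trials, n_products))        # true demands
signal = demand + rng.normal(0, 10, demand.shape)           # noisy demand signal
forecast = signal + rng.normal(0, 10, demand.shape)         # noisy interpretation of the signal

print("bias over all products:", (forecast - demand).mean())        # ~0 (unbiased)
rows = np.arange(n_trials)
chosen = forecast.argmax(axis=1)                                     # launch the best-looking product
print("bias for chosen products:", (forecast[rows, chosen] - demand[rows, chosen]).mean())  # > 0
```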
15. Accelerated gradient descent using improved Selective Backpropagation.
- Author
- Hosseinali, Farzad
- Subjects
- PRODUCTION standards, GENERALIZATION, FORECASTING, ALGORITHMS
- Abstract
An improved version of the Selective Backpropagation (SBP+) is described which significantly reduces the training time and enhances the generalization. The algorithm eliminates correctly predicted instances from the Backpropagation (BP) in an iteration. The minimum useful content level to define a correct prediction is determined using the hyperparameter c. A new performance measure d_m is introduced, which denotes the average of the fraction of remaining instances in the BP during an epoch. With a proper selection of c, a model can be adjusted to learn dominant patterns (mostly found in instances located further away from a decision hyperplane) and skip minor details (the very details which are due to the random error and can be found in instances close to a decision hyperplane). Since the SBP+ assigns a lower correct softmax score to instances located closer to a decision hyperplane, it is shown that — under certain conditions — a Neural Network's (NN) standard crisp output can be converted to a fuzzy membership score set. The regulating effect of the content level c is compared with other standard techniques and its advantages are outlined. Lastly, the statistical process leading to minimization of the useful total error is explicated and the SBP+ ability to pinpoint outliers is illustrated with an example. • Improved Selective Backpropagation is described which accelerates training. • Acceleration is due to the elimination of accurate predictions from backpropagation. • Improvement in accuracy is due to the inclusion of a random error term. • The minimum content to be learned from instances is determined with hyperparameter c. • A performance measure is introduced, denoting the fraction of remaining instances. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
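A minimal PyTorch sketch of the core idea above: drop confidently correct instances from the backward pass. The content-level rule based on the hyperparameter c is reduced here to a simple softmax-score cutoff, so this illustrates the mechanism rather than the SBP+ algorithm itself.

```python
import torch
import torch.nn.functional as F

def selective_backprop_step(model, optimizer, x, y, c=0.9):
    """One training step that backpropagates only instances with softmax score < c."""
    logits = model(x)
    correct_score = F.softmax(logits, dim=1).gather(1, y.unsqueeze(1)).squeeze(1)
    keep = correct_score < c                 # skip confidently correct instances
    if keep.any():
        loss = F.cross_entropy(logits[keep], y[keep])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return keep.float().mean().item()        # fraction retained (cf. the d_m measure)
```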
16. Prediction skill of tropical synoptic scale transients from ECMWF and NCEP ensemble prediction systems
- Author
- Landu, Kiranmayi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Indian Institute of Technology Bhubaneshwar, Bhubaneshwar (India)]
- Published
- 2016
- Full Text
- View/download PDF
17. Evaluation and Error Decomposition of IMERG Product Based on Multiple Satellite Sensors
- Author
- Yunping Li, Ke Zhang, Andras Bardossy, Xiaoji Shen, and Yujia Cheng
- Subjects
- multi-satellite remote sensing precipitation, IMERG, multiple satellite sensors, 4CED, systematic error, random error, Science
- Abstract
The Integrated Multisatellite Retrievals for GPM (IMERG) is designed to derive precipitation by merging data from all the passive microwave (PMW) and infrared (IR) sensors. While the input source errors originating from the PMW and IR sensors are important, their structure, characteristics, and algorithm improvement remain unclear. Our study utilized a four-component error decomposition (4CED) method and a systematic and random error decomposition method to evaluate the detectability of IMERG dataset and identify the precipitation errors based on the multi-sensors. The 30 min data from 30 precipitation stations in the Tunxi Watershed were used to evaluate the IMERG data from 2018 to 2020. The input source includes five types of PMW sensors and IR instruments. The results show that the sample ratio for IR (Morph, IR + Morph, and IR only) is much higher than that for PMW (AMSR2, SSMIS, GMI, MHS, and ATMS), with a ratio of 72.8% for IR sources and a ratio of 27.2% for PMW sources. The high false ratio of the IR sensor leads to poor detectability performance of the false alarm ratio (FAR, 0.5854), critical success index (CSI, 0.3014), and Brier score (BS, 0.1126). As for the 4CED, Morph and Morph + IR have a large magnitude of high total bias (TB), hit overestimate bias (HOB), hit underestimate bias (HUB), false bias (FB), and miss bias (MB), which is related to the prediction ability and sample size. In addition, systematic error is the prominent component for AMSR2, SSMIS, GMI, and Morph + IR, indicating some inherent error (retrieval algorithm) that needs to be removed. These findings can support improving the retrieval algorithm and reducing errors in the IMERG dataset.
- Published
- 2023
- Full Text
- View/download PDF
18. A Random Error Suppression Method Based on IGWPSO-ELM for Micromachined Silicon Resonant Accelerometers
- Author
- Peng Wang, Libin Huang, Liye Zhao, and Xukai Ding
- Subjects
- MSRA, random error, ELM, IGWPSO, Mechanical engineering and machinery, TJ1-1570
- Abstract
There are various errors in practical applications of micromachined silicon resonant accelerometers (MSRA), among which the composition of random errors is complex and uncertain. In order to improve the output accuracy of MSRA, this paper proposes an MSRA random error suppression method based on an improved grey wolf and particle swarm optimized extreme learning machine (IGWPSO-ELM). A modified wavelet threshold function is firstly used to separate the white noise from the useful signal. The output frequency at the previous sampling point and the sequence value are then added to the current output frequency to form a three-dimensional input. Additional improvements are made on the particle swarm optimized extreme learning machine (PSO-ELM): the grey wolf optimization (GWO) is fused into the algorithm and the three factors (inertia, acceleration and convergence) are non-linearized to improve the convergence efficiency and accuracy of the algorithm. The model trained offline using IGWPSO-ELM is applied to predicting compensation experiments, and the results show that the method is able to reduce velocity random walk from the original 4.3618 μg/√Hz to 2.1807 μg/√Hz, bias instability from the original 2.0248 μg to 1.3815 μg, and acceleration random walk from the original 0.53429 μg·√Hz to 0.43804 μg·√Hz, effectively suppressing the random error in the MSRA output.
- Published
- 2023
- Full Text
- View/download PDF
19. Potential impact of systematic and random errors in blood pressure measurement on the prevalence of high office blood pressure in the United States.
- Author
- Sakhuja, Swati, Jaeger, Byron C., Akinyelure, Oluwasegun P., Bress, Adam P., Shimbo, Daichi, Schwartz, Joseph E., Hardy, Shakia T., Howard, George, Drawz, Paul, and Muntner, Paul
- Abstract
The authors examined the proportion of US adults who would have their high blood pressure (BP) status changed if systolic BP (SBP) and diastolic BP (DBP) were measured with systematic bias and/or random error versus following a standardized protocol. Data from the 2017–2018 National Health and Nutrition Examination Survey (NHANES; n = 5176) were analyzed. BP was measured up to three times using a mercury sphygmomanometer by a trained physician following a standardized protocol and averaged. High BP was defined as SBP ≥130 mm Hg or DBP ≥80 mm Hg. Among US adults not taking antihypertensive medication, 32.0% (95%CI: 29.6%,34.4%) had high BP. If SBP and DBP were measured with systematic bias, 5 mm Hg for SBP and 3.5 mm Hg for DBP higher and lower than in NHANES, the proportion with high BP was estimated to be 44.4% (95%CI: 42.6%,46.2%) and 21.9% (95%CI 19.5%,24.4%). Among US adults taking antihypertensive medication, 60.6% (95%CI: 57.2%,63.9%) had high BP. If SBP and DBP were measured 5 and 3.5 mm Hg higher and lower than in NHANES, the proportion with high BP was estimated to be 71.8% (95%CI: 68.3%,75.0%) and 48.4% (95%CI: 44.6%,52.2%), respectively. If BP was measured with random error, with standard deviations of 15 mm Hg for SBP and 7 mm Hg for DBP, 21.4% (95%CI: 19.8%,23.0%) of US adults not taking antihypertensive medication and 20.5% (95%CI: 17.7%,23.3%) taking antihypertensive medication had their high BP status re-categorized. In conclusion, measuring BP with systematic or random errors may result in the misclassification of high BP for a substantial proportion of US adults. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
20. An Advanced Framework for Merging Remotely Sensed Soil Moisture Products at the Regional Scale Supported by Error Structure Analysis: A Case Study on the Tibetan Plateau
- Author
- Jian Kang, Rui Jin, and Xin Li
- Subjects
- Data fusion, error decomposition, random error, remote sensing product, soil moisture (SM), systemic error, Ocean engineering, TC1501-1800, Geophysics. Cosmic physics, QC801-809
- Abstract
Data fusion can effectively improve the accuracy of remotely sensed (RS) soil moisture (SM) products. Understanding the error structures of RS SM products is beneficial for formulating a data fusion scheme. In this article, a data fusion scheme is examined on the Tibetan Plateau, and the Soil Moisture Active Passive mission, Soil Moisture and Ocean Salinity mission, and Advanced Microwave Scanning Radiometer 2 products are used as the experimental input datasets. The RS apparent thermal inertia (ATI) is transformed into SM values as the reference data with reliable systemic variability. The ATI-based SM, along with three RS SM products, is introduced into the triple collocation (TC) method to decompose the errors of the three RS SM products into systemic and random errors at each RS pixel. Due to the presence of systemic errors, the temporal mean values and amplitudes of the three RS SM products were calibrated by those of the ATI-based SM. The rescaled anomalies (including amplitude and random error) were merged according to their random errors estimated by the TC method, and then the merged anomalies were added to the temporal mean values of the ATI-based SM to obtain the final merged results. Compared with the merged European Space Agency Climate Change Initiative passive SM product and input SM datasets, the merged results in this article exhibit optimal accuracy. The scheme for merging RS SM products shows high data fusion performance and can be further considered a reliable way to obtain a high-quality merged RS SM dataset.
- Published
- 2021
- Full Text
- View/download PDF
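The random-error variances that drive the merging weights in schemes like the one above are typically obtained with triple collocation. Below is the classical covariance-notation estimate for three collocated products with mutually independent errors; the paper's additional ATI-based rescaling of means and amplitudes is not reproduced.

```python
import numpy as np

def tc_error_variances(x, y, z):
    """x, y, z: 1-D arrays of collocated soil-moisture anomalies from three products."""
    c = np.cov(np.vstack([x, y, z]))                    # 3 x 3 sample covariance matrix
    var_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]       # random-error variance of product x
    var_y = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    var_z = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return var_x, var_y, var_z
```

Merging weights inversely proportional to these variances then give the minimum-variance combination of the anomalies.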
21. Commentary: Characteristics and Long-Term Ablation Outcomes of Supraventricular Arrhythmias in Hypertrophic Cardiomyopathy: A 10-Year, Single-Center Experience
- Author
- Rui-Huan Shen and Tong Zou
- Subjects
- atrial fibrillation, hypertrophic cardiomyopathy, random error, survival analysis, Kaplan-Meier, Diseases of the circulatory (Cardiovascular) system, RC666-701
- Published
- 2022
- Full Text
- View/download PDF
22. ARRAY RADIATION PATTERN RECOVERY UNDER RANDOM ERRORS USING CLUSTERED LINEAR ARRAY.
- Author
- Abdulqader, Ahmed J., Thaher, Raad H., and Mohammed, Jafar R.
- Subjects
- COST functions, RADIATION
- Abstract
In practice, random errors in the excitations (amplitude and phase) of array elements cause undesired variations in the array patterns. In this paper, a clustered-array-elements technique with tapered amplitude excitations is introduced to reduce the impact of random weight errors and recover the desired patterns. The most beneficial feature of the suggested method is that it can be used in the design stage to account for any amplitude errors instantly. The cost function of the optimizer used is restricted to avoid any unwanted rises in sidelobe levels caused by unexpected perturbation errors. Furthermore, errors in the element amplitude excitations are assumed to occur either randomly or sectionally (i.e., an error affecting only a subset of the array elements) across the entire array aperture. The validity of the proposed approach is entirely supported by simulation studies. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
23. Concurrent Validity of Power From Three On-Water Rowing Instrumentation Systems and a Concept2 Ergometer.
- Author
- Holt, Ana C., Hopkins, William G., Aughey, Robert J., Siegel, Rodney, Rouillard, Vincent, and Ball, Kevin
- Subjects
- EXCLUSIVE & concurrent legislative powers, TEST validity, STATISTICAL accuracy, DYNAMOMETER, ROWING training, EXERCISE tests, ERGOMETRY
- Abstract
Purpose: Instrumentation systems are increasingly used in rowing to measure training intensity and performance but have not been validated for measures of power. In this study, the concurrent validity of Peach PowerLine (six units), Nielsen-Kellerman EmPower (five units), Weba OarPowerMeter (three units), Concept2 model D ergometer (one unit), and a custom-built reference instrumentation system (Reference System; one unit) were investigated. Methods: Eight female and seven male rowers [age, 21 ± 2.5 years; rowing experience, 7.1 ± 2.6 years, mean ± standard deviation (SD)] performed a 30-s maximal test and a 7 × 4-min incremental test once per week for 5 weeks. Power per stroke was extracted concurrently from the Reference System (via chain force and velocity), the Concept2 itself, Weba (oar shaft-based), and either Peach or EmPower (oarlock-based). Differences from the Reference System in the mean (representing potential error) and the stroke-to-stroke variability (represented by its SD) of power per stroke for each stage and device, and between-unit differences, were estimated using general linear mixed modeling and interpreted using rejection of non-substantial and substantial hypotheses. Results: Potential error in mean power was decisively substantial for all devices (Concept2, –11 to –15%; Peach, −7.9 to −17%; EmPower, −32 to −48%; and Weba, −7.9 to −16%). Between-unit differences (as SD) in mean power lacked statistical precision but were substantial and consistent across stages (Peach, ∼5%; EmPower, ∼7%; and Weba, ∼2%). Most differences from the Reference System in stroke-to-stroke variability of power were possibly or likely trivial or small for Peach (−3.0 to −16%), and likely or decisively substantial for EmPower (9.7–57%), and mostly decisively substantial for Weba (61–139%) and the Concept2 (−28 to 177%). Conclusion: Potential negative error in mean power was evident for all devices and units, particularly EmPower. Stroke-to-stroke variation in power showed a lack of measurement sensitivity (apparent smoothing) that was minor for Peach but larger for the Concept2, whereas EmPower and Weba added random error. Peach is therefore recommended for measurement of mean and stroke power. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
24. A Frequency Domain Error Model for Local 2D-DIC.
- Author
- Wang, Z., Kang, K., Wang, Z., Wu, H., Wang, S., Li, L., and Li, C.
- Subjects
- DIGITAL image correlation, FOURIER analysis, DEFORMATION of surfaces, MEASUREMENT errors
- Abstract
Background: Digital image correlation (DIC) has become a superior tool for surface deformation measurement. In order to improve its measurement accuracy, different error models have been developed to reveal the causes of various errors. Objective: An error model unifying the random error and systematic error should be valuable. Method: In this paper, a frequency-domain error model for 2D-DIC is established based on Fourier analysis. Using the model, the error caused by random noise, subpixel interpolation and the combination of the two can be analyzed and predicted. Results: Numerical experiments were conducted to verify the effectiveness of the proposed models. The results from theoretical models and numerical experiments are in good agreement. Conclusions: It is demonstrated that the proposed model is a generalization of several previously established models. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
25. Error Propagation.
- Author
- Singh, Arvind and Chaturvedi, Priyanka
- Subjects
- NONLINEAR functions, TRANSCENDENTAL functions, ERROR functions, DEPENDENT variables, DATA entry, INDEPENDENT variables
- Abstract
Error is not a mistake in science. A mistake can be due to an incorrect entry of data in an experiment or an incorrect calculation. Error refers to the precision (uncertainty) of measurements, which one obtains by estimating the standard deviation of repetitive measurements of a certain parameter. We often use "the method of error propagation" to determine uncertainty (error) in a dependent variable from the measured uncertainty in the independent variables. Here, we discuss the origin of the error propagation equation and the assumptions considered to derive it. Intuitional notion of error propagation in statistics suggests that random relative error in the dependent variable cannot be less than the sum of those in the independent variable(s). In this article, we explain that some transcendental functions (such as trigonometric and log functions), however, do not follow this notion of error propagation because their first partial derivatives are usually small in magnitude and sometimes vanish completely at certain points. We further explain and discuss the behaviour of such a function. We have made suggestions for estimating errors in such non-linear functions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
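The propagation formula the article discusses is the standard first-order Taylor-series result for independent inputs; note that for a function such as f(x) = sin x the derivative cos x vanishes at x = π/2, which is the kind of behaviour the authors highlight for transcendental functions.

```latex
% First-order error propagation for f(x_1,\dots,x_n) with independent uncertainties:
\sigma_f^{2} \;\approx\; \sum_{i=1}^{n}\left(\frac{\partial f}{\partial x_i}\right)^{2}\sigma_{x_i}^{2}
```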
26. Random error identification for MEMS gyro in coal mine underground
- Author
- CONG Lin, WANG Xiaolong, and YAN Bi
- Subjects
- borehole clinometer, MEMS gyro, random error, error identification, Allan variance, Mining engineering. Metallurgy, TN1-997
- Abstract
Common random error identification methods fail to reveal the potential error sources, can hardly separate specific random errors, and require long data collection times. To address these problems, the Allan variance analysis method was used to identify the random error of a MEMS gyro used in underground coal mines. The principle of the Allan variance analysis method is introduced. The method was then applied to measured MEMS gyro data, Allan standard deviation curves were plotted, and the main random error coefficients of the MEMS gyro were obtained by least-squares fitting. The experimental results verify the validity of the Allan variance analysis method for MEMS gyro random error identification.
- Published
- 2019
- Full Text
- View/download PDF
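A plain (non-overlapping) Allan deviation for one cluster size can be computed in a few lines; identifying the random-error coefficients then amounts to least-squares fitting the resulting log-log curve over many cluster sizes, as described in the abstract above.

```python
import numpy as np

def allan_deviation(rate, fs, m):
    """rate: 1-D gyro rate samples; fs: sample rate [Hz]; m: cluster size in samples."""
    n_clusters = len(rate) // m
    means = np.asarray(rate[: n_clusters * m], float).reshape(n_clusters, m).mean(axis=1)
    avar = 0.5 * np.mean(np.diff(means) ** 2)     # Allan variance for averaging time tau = m/fs
    return m / fs, np.sqrt(avar)                   # (tau, Allan deviation)

# Example usage over a range of cluster sizes:
# taus, adev = zip(*(allan_deviation(gyro_rate, 100.0, m) for m in (1, 2, 4, 8, 16, 32, 64)))
```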
27. Effect of systematic and random flow measurement errors on history matching: a case study on oil and wet gas reservoirs
- Author
- Mahdi Sadri, Seyed M. Shariatipour, Andrew Hunt, and Masoud Ahmadinia
- Subjects
- Flow measurement, History matching, Systematic error, Random error, Wet gas reservoir, Petroleum refining. Petroleum products, TP690-692.5, Petrology, QE420-499
- Abstract
Abstract History matching is the process of modifying a numerical model (representing a reservoir) in the light of observed production data. In the oil and gas industry, production data are employed during a history matching exercise in order to reduce the uncertainty in associated reservoir models. However, production data, normally measured using commercial flowmeters that may or may not be accurate depending on factors such as maintenance schedules, or estimated using mathematical equations, inevitably has inherent errors. In other words, the data which are used to reduce the uncertainty of the model may have considerable uncertainty in itself. This problem is exacerbated for gas condensate and wet gas reservoirs as there are even greater errors associated with measuring small fractions of liquid. The influence of this uncertainty in the production data on history matching has not been addressed in the literature so far. In this paper, the effect of systematic and random flow measurement errors on history matching is investigated. Initially, 14 production data sets with different ranges of systematic and random errors, from 0 to 10%, have been employed in a history matching exercise for an oil reservoir and the results have later been evaluated based on a reference model. Subsequently, 23 data sets with errors ranging from 0 to 20% have been employed in the same process for a wet gas reservoir. The results show that for both cases systematic errors considerably affect history matching, while the effect of random errors on the considered scenarios is seen to be insignificant. Although reservoir model parameters in the wet gas reservoir were not as sensitive to the flow measurement errors as in the oil reservoir, for both cases, the future production forecast was significantly affected by the errors. Permeability was seen to be the most sensitive history matching parameter to the flow measurement errors in the oil reservoir, while for the wet gas reservoir, the most sensitive parameter was the forecast of future oil and gas production. Finally, considering the noticeable effect of systematic errors on both cases, it is suggested that flowmeter calibration and regular maintenance is prioritised, although the subsequent economic cost needs to be considered.
- Published
- 2019
- Full Text
- View/download PDF
28. Random error units, extension of a novel method to express random error in epidemiological studies
- Author
- Janszky I, Bjørngaard JH, Romundstad P, Vatten L, and Orsini N
- Subjects
- random error, confidence intervals, p value, Infectious and parasitic diseases, RC109-216
- Abstract
Currently used methods to express random error are often misinterpreted and consequently misused by biomedical researchers. Previously we proposed a simple approach to quantify the amount of random error in epidemiological studies using OR for binary exposures. Expressing random error with the number of random error units (REU) does not require a solid background in statistics for a proper interpretation and cannot be misused for making oversimplistic interpretations relying on statistical significance. We now expand the use of REU to the most common measures of association in epidemiology and to continuous variables, and we have developed a Stata program, which greatly facilitates the calculation of REU. Keywords: statistical significance, confidence intervals, P value, random error, random error units
- Published
- 2019
29. Random Error Characterization of Nonsmooth Parabolic Reflector Antennas With Gore-Faceted or Discontinuous Surface.
- Author
- Zhang, Shuxin and Duan, Baoyan
- Subjects
- SATELLITE dish antennas, REFLECTOR antennas, GAUSSIAN distribution, TAYLOR'S series, DISTRIBUTION (Probability theory)
- Abstract
A generalized random error characterization method is proposed mainly to evaluate the effects of random error on radiation pattern for nonsmooth parabolic reflector antennas, which are synthesized by the gore-faceted or discontinuous surface. Based on the previous work of the second-order Taylor series expansion of the radiation integral, two matrix-form formulas of the generalized method are derived. With the assumed Gaussian distribution of the random surface error, the average power pattern can be easily calculated by the generalized method. The proposed method can overcome the drawbacks of the previous methods for nonsmooth parabolic reflector antennas and its computational accuracy can be guaranteed. The wide applicability of the generalized method is sequentially demonstrated by a smooth reflector antenna, a 0.5 m Ka-band gore-faceted rib reflector, and a 1 m Ka-band symmetric stepped reflector. Through the simulations on different types of nonsmooth parabolic reflector antennas, the versatility of the proposed method can be easily illustrated. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
30. Eliciting Human Judgment for Prediction Algorithms.
- Author
- Ibrahim, Rouba, Kim, Song-Hee, and Tong, Jordan
- Subjects
- HUMAN error, FORECASTING, ALGORITHMS, OPERATIONS management, HUMAN beings
- Abstract
Even when human point forecasts are less accurate than data-based algorithm predictions, they can still help boost performance by being used as algorithm inputs. Assuming one uses human judgment indirectly in this manner, we propose changing the elicitation question from the traditional direct forecast (DF) to what we call the private information adjustment (PIA): how much the human thinks the algorithm should adjust its forecast to account for information the human has that is unused by the algorithm. Using stylized models with and without random error, we theoretically prove that human random error makes eliciting the PIA lead to more accurate predictions than eliciting the DF. However, this DF-PIA gap does not exist for perfectly consistent forecasters. The DF-PIA gap is increasing in the random error that people make while incorporating public information (data that the algorithm uses) but is decreasing in the random error that people make while incorporating private information (data that only the human can use). In controlled experiments with students and Amazon Mechanical Turk workers, we find support for these hypotheses. This paper was accepted by Charles Corbett, operations management. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
31. Błędy nielosowe i ich znaczenie w testowaniu hipotez [Non-random errors and their significance in hypothesis testing].
- Author
- Szreder, Mirosław
- Published
- 2021
- Full Text
- View/download PDF
32. Empirical Distribution of Conditional Errors in Radar Rainfall Products.
- Author
- Ciach, Grzegorz J. and Gebremichael, Mekonnen
- Subjects
- RAIN gauges, RAINFALL measurement, RAINFALL, PROBABILITY density function, RADAR meteorology, RADAR
- Abstract
Efficient quantification of conditional errors in radar rainfall (RR) products requires a realistic model of the random error distribution. A nonparametric estimate of the probability density function (pdf) of standardized RR errors is obtained using a large data sample. The standardization is based on a second-order separation of systematic and random effects. The estimated empirical distribution is skewed and has exponential shapes of its tails with two different scales. It cannot be modeled with the Gaussian law that was used in several previous studies. A good representation of the tails of the empirical distribution of RR errors is obtained with a three-parameter modified Laplace model with a shift and unequal slopes of its two sides. Plain Language Summary: Reliable measurement of rainfall is important for hydrology, water management, agriculture, and other users. It is a challenging task because rainfall is highly variable and networks of rain gauges are too sparse to capture this variability. Thus, rainfall estimates based on weather radars are becoming indispensable because they provide continuous coverage of the area up to about 100 km around the radars. However, they are ridden by many uncertainties that must be known in order to avoid dangerous mistakes in all applications of the estimates. In this paper, we contribute an important new piece to this knowledge—the empirically based information about the frequency of occurrence of different magnitudes of the errors that is called the error probability distribution. It makes it possible to evaluate their impact on different applications of radar rainfall. It also enables more reliable comparisons of different rainfall estimation methods in order to monitor progress in this area. The new information was obtained by applying a new mathematical modeling method to a large sample of radar rainfall estimates combined with rain gauge measurements of true rainfall. Many previous applications assumed that radar rainfall errors were distributed according to the commonly used bell curve called the normal distribution. Our results show that the actual error distribution differs considerably from normality and that using the normal model leads to misrepresentation of the uncertainties. We now have a strongly supported mathematical tool to treat them in a much more realistic way. Key Points: Empirical distribution of conditional random errors in radar rainfall is estimated using second-order separation of systematic and random effects. The error distribution cannot be described with the Gaussian model used before. An asymmetric shifted Laplace model fits the tails of the empirical distribution well. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
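One way to write a shifted Laplace density with unequal slopes of its two tails, of the kind the study fits to the standardized RR errors, is given below; μ is the shift and b₋, b₊ the two tail scales (the paper's exact parameterization may differ).

```latex
f(e) \;=\; \frac{1}{b_- + b_+}
\begin{cases}
\exp\!\bigl((e-\mu)/b_-\bigr), & e < \mu,\\[2pt]
\exp\!\bigl(-(e-\mu)/b_+\bigr), & e \ge \mu.
\end{cases}
```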
33. Generation of strongly non-Gaussian stochastic processes by iterative scheme upgrading phase and amplitude contents.
- Author
- Wu, Yongxin, Zhang, Houle, and Gao, Yufeng
- Subjects
- STOCHASTIC processes, MONTE Carlo method, CUMULATIVE distribution function, WIND speed, MARGINAL distributions, GAUSSIAN distribution
- Abstract
• An efficient method to simulate stationary strongly non-Gaussian process is proposed. • The proposed method performs well in some cases in terms of the convergence speed and accuracy. • The importance of upgrading both amplitude and phase contents is demonstrated. Random excitations, such as wind velocity, always exhibit non-Gaussian features. Sample realisations of stochastic processes satisfying given features should be generated, in order to perform the dynamical analysis of structures under stochastic loads based on the Monte Carlo simulation. In this paper, an efficient method is proposed to generate stationary non-Gaussian stochastic processes. It involves an iterative scheme that produces a class of sample processes satisfying the following conditions. (1) The marginal cumulative distribution function of each sample process is perfectly identical to the prescribed one. (2) The ensemble-averaged power spectral density function of these non-Gaussian sample processes is as close to the prescribed target as possible. In this iterative scheme, the underlying processes are generated by means of the spectral representation method that recombines the upgraded power spectral density function with the phase contents of the new non-Gaussian processes in the latest iteration. Numerical examples are provided to demonstrate the capabilities of the proposed approach for four typical non-Gaussian distributions, some of which deviate significantly from the Gaussian distribution. It is found that the estimated power spectral density functions of non-Gaussian processes are close to the target ones, even for the extremely non-Gaussian case. Furthermore, the capability of the proposed method is compared to two other methods. The results show that the proposed method performs well with convergence speed, accuracy, and random errors of power spectral density functions. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
34. Admission Control Biases in Hospital Unit Capacity Management: How Occupancy Information Hurdles and Decision Noise Impact Utilization.
- Author
- Kim, Song-Hee, Tong, Jordan, and Peden, Carol
- Subjects
- HOSPITAL utilization, PATIENT care, HUMAN behavior models, HUMAN behavior, NOISE
- Abstract
Providing patients with timely care from the appropriate unit involves both correct clinical evaluation of patient needs and making admission decisions to effectively manage a unit with limited capacity in the face of stochastic patient arrivals and lengths of stay. We study human decision behavior in the latter operations management task. Using behavioral models and controlled experiments in which physicians and MTurk workers manage a simulated hospital unit, we identify cognitive and environmental factors that drive systematic admission decision bias. We report on two main findings. First, seemingly innocuous "occupancy information hurdles" (e.g., having to type a password to view current occupancy) can cause a chain of events that leads physicians to maintain systematically lower unit utilization. Specifically, these hurdles cause physicians to make most admission decisions without checking the current unit occupancy. Then—between the times that they do check—physicians underestimate the number of available beds when occupancy increases from admissions are more salient than occupancy decreases from discharges. Second, decision-related random error or "noise" leads to higher- or lower-than-optimal utilization of hospital units in predictable patterns, depending on the system parameters. We provide evidence that these patterns are due to some settings providing more opportunity for physicians to mistakenly admit patients and other settings that provide more opportunity to mistakenly reject patients. These findings help identify when and why clinicians are likely to make inefficient decisions because of human cognitive limitations and suggest mitigation strategies to help hospital units improve their capacity management. This paper was accepted by Charles Corbett, operations management. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
35. Determination of an optimal treatment margin for intracranial tumours treated with radiotherapy at Groote Schuur Hospital
- Author
- Andre Vos, Thuran Naiker, and Hannelie MacGregor
- Subjects
- setup margin, random error, systematic error, intracranial tumours, CTV-PTV expansion margin, Neoplasms. Tumors. Oncology. Including cancer and carcinogens, RC254-282
- Abstract
Background: Accurate delivery of radiotherapy is a paramount component of providing safe oncological care. Margins are applied when planning radiotherapy to account for subclinical tumour spread, physiological movement and setup error. Setup error is unique to each radiotherapy institution and should be calculated for each organ site to ensure safe delivery of treatment. Aim: The aim of this study is to calculate the random and systematic setup error for a cohort of patients with intracranial tumours treated with 3D Conformal Radiotherapy. Setting: The Department of Radiation Oncology, Groote Schuur Hospital, South Africa. Method: After obtaining the above-mentioned data, the ideal Clinical Target Volume (CTV)-Planning Target Volume (PTV) expansion margin was calculated using published CTV-PTV expansion margin recipes. The electronic portal images of 20 patients who met the inclusion criteria were compared to their digitally reconstructed radiographs. The setup error for each patient was measured, after which the random (σ) and systematic (Σ) setup error for the study group could be calculated. With both these values known, the CTV-PTV expansion margin could be determined. Results: The largest error was in the superior/inferior direction (87.7% ≤ 3 mm; 6.1% > 5 mm), followed by the medial/lateral direction (76.2% ≤ 3 mm; 0% > 5 mm) and least in the anterior/posterior direction (91.6% ≤ 3 mm; 0% > 5 mm). The random and systematic errors in all three directions for this patient cohort were less than 2 mm, conforming to acceptable standards of delivering safe radiotherapy. Using Stroom's margin recipe (2Σ + 0.7σ) a CTV-PTV expansion margin of 5 mm can safely be applied for this patient cohort. Conclusion: When treating patients with intracranial tumours at Groote Schuur Hospital the CTV-PTV expansion margin can safely be reduced from 1 cm to 5 mm.
- Published
- 2020
- Full Text
- View/download PDF
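The abstract quotes Stroom's margin recipe, margin = 2Σ + 0.7σ. A tiny worked example of that arithmetic is shown below; the per-direction Σ and σ values are placeholders consistent with the reported "less than 2 mm", not the study's actual measurements.

```python
# Worked example of the CTV-PTV margin recipe quoted in the abstract (Stroom: 2*Sigma + 0.7*sigma).
# The per-direction values below are placeholders under 2 mm, not the study's data.
def stroom_margin(systematic_mm, random_mm):
    """Return the CTV-PTV expansion margin in mm for one direction."""
    return 2.0 * systematic_mm + 0.7 * random_mm

for direction, (Sigma, sigma) in {"sup/inf": (1.8, 1.9),
                                  "med/lat": (1.5, 1.6),
                                  "ant/post": (1.2, 1.4)}.items():
    print(f"{direction}: {stroom_margin(Sigma, sigma):.1f} mm")
```

For the placeholder values the recipe gives roughly 3.4 to 4.9 mm, on the order of the 5 mm margin adopted in the study; at the 2 mm bound for both error types it would reach 5.4 mm.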
36. Study to compare the effect of different registration methods on patient setup uncertainties in cone-beam computed tomography during volumetric modulated arc therapy for breast cancer patients
- Author
-
P Mohandass, D Khanna, T Manoj Kumar, T Thiyagaraj, C Saravanan, Narendra Kumar Bhalla, and Abhishek Puri
- Subjects
Breast cancer ,clip-box ,cone-beam computed tomography ,mask ,random error ,systematic error ,volumetric modulated arc therapy ,Medical physics. Medical radiology. Nuclear medicine ,R895-920 - Abstract
Purpose: This study compared three different methods used in registering cone-beam computed tomography (CBCT) image sets with planning CT image sets for determining patient setup uncertainties during volumetric modulated arc therapy (VMAT) for breast cancer patients. Materials and Methods: Seven breast cancer patients treated with 50 Gy in 25 fractions using the VMAT technique were chosen for this study. A total of 105 CBCT scans were acquired under the image guidance protocol for patient setup verification. The approved plans' CT images were used as the reference image sets for registration with their corresponding CBCT image sets. Setup errors in the mediolateral, craniocaudal, and anteroposterior directions were determined using gray-scale matching between the reference CT images and onboard CBCT images. Patient setup verification was performed using the clip-box registration (CBR) method during online imaging. Considering the CBR method as the reference, two more registrations were performed using the mask registration (MR) method and the dual registration (DR) (CBR + MR) method in the offline mode. For comparison, systematic error (∑), random error (σ), mean displacement vector (R), mean setup error (M), and registration time (Rt) were analyzed. Post hoc Tukey's honest significant difference test was performed for multiple comparisons. Results: Systematic and random errors were less in CBR as compared to MR and DR (P > 0.05). The mean displacement error and mean setup errors were less in CBR as compared to MR and DR (P > 0.05). Increased Rt was observed in DR as compared to CBR and MR (P < 0.05). In addition, multiple comparisons did not show any significant difference in patient setup error (P > 0.05). Conclusion: For breast VMAT plan delivery, all three registration methods show insignificant variation in patient setup error. One can use any of the three registration methods for patient setup verification (an illustrative computation of ∑ and σ follows this record).
- Published
- 2018
- Full Text
- View/download PDF
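The abstract compares population systematic (∑) and random (σ) setup errors but does not restate how they are computed. A common convention (often attributed to van Herk) takes ∑ as the standard deviation of the per-patient mean errors and σ as the root mean square of the per-patient standard deviations; the sketch below follows that convention as an assumption, with made-up shift data, and may differ from the paper's exact procedure.

```python
import numpy as np

# Hypothetical per-fraction setup shifts (mm) along one axis, one row per patient.
shifts = [np.array([1.0, -0.5, 0.8, 0.2]),      # patient 1
          np.array([-0.3, 0.1, -0.6]),          # patient 2
          np.array([0.4, 0.9, 0.5, 0.7, 0.2])]  # patient 3

patient_means = np.array([s.mean() for s in shifts])
patient_sds   = np.array([s.std(ddof=1) for s in shifts])

group_mean = patient_means.mean()                  # M: overall mean setup error
Sigma      = patient_means.std(ddof=1)             # systematic error: SD of patient means
sigma      = np.sqrt((patient_sds ** 2).mean())    # random error: RMS of patient SDs

print(f"M = {group_mean:.2f} mm, Sigma = {Sigma:.2f} mm, sigma = {sigma:.2f} mm")
```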
37. Statistical Characterization of PMU Error for Robust WAMS Based Analytics.
- Author
-
Ahmad, Tabia and Senroy, Nilanjan
- Subjects
- *
PHASOR measurement , *GAUSSIAN mixture models , *MEASUREMENT errors , *CURRENT transformers (Instrument transformer) , *NULL hypothesis - Abstract
Synchronized phasor measurement unit (PMU) data contain rich information about power systems, hence the quality of estimates is of utmost concern. The existing methodologies for estimation rely on certain assumptions regarding the error in the measurement data. This paper revisits a key assumption, specifically the Gaussian character of the error. A quantification of the PMU error yields its nature and statistical properties, including its dependence on various sections of the PMU instrumentation channel (supposedly the major source of error in the PMU data). The non-Gaussian nature of the error is asserted using various null hypothesis tests, and a novel Gaussian mixture model based clustering technique is proposed to characterize and relate the errors present in PMU measurement data to saturation in the current transformer, cable length and the PMU burden. The proposed approach is tested using both real and synthetic PMU datasets. The ultimate goal of the paper is to create a PMU error emulator for testing and research of data analytic algorithms focused on crucial WAMS based applications (a generic mixture-model fitting sketch follows this record). [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
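The characterization step relies on fitting a Gaussian mixture model to measured PMU error samples. The sketch below shows a generic version of that fit with scikit-learn on synthetic, deliberately non-Gaussian error samples; the two-component distribution, its parameters, and the choice of library are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic non-Gaussian "PMU error" samples: a narrow nominal mode plus a wider
# mode standing in for saturation-related error (purely illustrative numbers).
errors = np.concatenate([rng.normal(0.0, 0.01, 8000),
                         rng.normal(0.05, 0.03, 2000)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(errors)
for w, mu, var in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"weight={w:.2f}, mean={mu:.4f}, std={np.sqrt(var):.4f}")
```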
38. Determination of an optimal treatment margin for intracranial tumours treated with radiotherapy at Groote Schuur Hospital.
- Author
-
Vos, Andre, Naiker, Thuran, and MacGregor, Hannelie
- Subjects
- *
RADIOTHERAPY , *TUMORS , *INTRACRANIAL tumors , *CANCER treatment , *COHORT analysis - Abstract
Background: Accurate delivery of radiotherapy is a paramount component of providing safe oncological care. Margins are applied when planning radiotherapy to account for subclinical tumour spread, physiological movement and setup error. Setup error is unique to each radiotherapy institution and should be calculated for each organ site to ensure safe delivery of treatment. Aim: The aim of this study is to calculate the random and systematic setup error for a cohort of patients with intracranial tumours treated with 3D Conformal Radiotherapy. Setting: The Department of Radiation Oncology, Groote Schuur Hospital, South Africa. Method: After obtaining the above-mentioned data, the ideal Clinical Target Volume (CTV)-Planning Target Volume (PTV) expansion margin was calculated using published CTV-PTV expansion margin recipes. The electronic portal images of 20 patients who met the inclusion criteria were compared to their digitally reconstructed radiographs. The setup error for each patient was measured, after which the random (σ) and systematic (Σ) setup error for the study group could be calculated. With both these values known, the CTV-PTV expansion margin could be determined. Results: The largest error was in the superior/inferior direction (87.7% < 3mm; 6.1% > 5mm), followed by the medial/lateral direction (76.2% < 3 mm; 0 > 5 mm) and least in the anterior/posterior direction (91.6% < 3 mm; 0 > 5 mm). The random and systematic errors in all three directions for this patient cohort were less than 2 mm, conforming to acceptable standards of delivering safe radiotherapy. Using Stroom's margin recipe (2Σ + 0.7σ), a CTV-PTV expansion margin of 5 mm can safely be applied for this patient cohort. Conclusion: When treating patients with intracranial tumours at Groote Schuur Hospital the CTV-PTV expansion margin can safely be reduced from 1 cm to 5 mm. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
39. Accounting for Uncertainties of the TRMM Satellite Estimates
- Author
-
AghaKouchak, Amir, Nasrollahi, Nasrin, and Habib, Emad
- Subjects
stochastic simulation ,TRMM data ,random error ,rainfall ensemble ,uncertainty analysis ,satellite estimates ,bootstrap technique ,artificial neural-network ,spatial variability ,precipitation estimation ,rainfall variability ,global precipitation ,catchment response ,tropical rainfall ,flood prediction ,Monte-Carlo ,runoff - Published
- 2009
40. Effect of systematic and random flow measurement errors on history matching: a case study on oil and wet gas reservoirs.
- Author
-
Sadri, Mahdi, Shariatipour, Seyed M., Hunt, Andrew, and Ahmadinia, Masoud
- Subjects
GAS reservoirs ,MEASUREMENT errors ,FLOW measurement ,MATCHING theory ,GAS condensate reservoirs ,PETROLEUM reservoirs - Abstract
History matching is the process of modifying a numerical model (representing a reservoir) in the light of observed production data. In the oil and gas industry, production data are employed during a history matching exercise in order to reduce the uncertainty in associated reservoir models. However, production data, normally measured using commercial flowmeters that may or may not be accurate depending on factors such as maintenance schedules, or estimated using mathematical equations, inevitably have inherent errors. In other words, the data which are used to reduce the uncertainty of the model may have considerable uncertainty in themselves. This problem is exacerbated for gas condensate and wet gas reservoirs, as there are even greater errors associated with measuring small fractions of liquid. The influence of this uncertainty in the production data on history matching has not been addressed in the literature so far. In this paper, the effect of systematic and random flow measurement errors on history matching is investigated (a minimal sketch of such error perturbation follows this record). Initially, 14 production data sets with different ranges of systematic and random errors, from 0 to 10%, were employed in a history matching exercise for an oil reservoir and the results were later evaluated against a reference model. Subsequently, 23 data sets with errors ranging from 0 to 20% were employed in the same process for a wet gas reservoir. The results show that for both cases systematic errors considerably affect history matching, while the effect of random errors on the considered scenarios is insignificant. Although reservoir model parameters in the wet gas reservoir were not as sensitive to the flow measurement errors as in the oil reservoir, for both cases the future production forecast was significantly affected by the errors. Permeability was the history matching parameter most sensitive to the flow measurement errors in the oil reservoir, while for the wet gas reservoir the most sensitive output was the forecast of future oil and gas production. Finally, considering the noticeable effect of systematic errors on both cases, it is suggested that flowmeter calibration and regular maintenance are prioritised, although the subsequent economic cost needs to be considered. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
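The study's core manipulation is perturbing production data with systematic and random flow measurement errors before history matching. A minimal sketch of generating such perturbed rate series is shown below; the synthetic decline curve, the 5% error levels, and the purely multiplicative error form are assumptions for illustration, not the paper's data sets.

```python
import numpy as np

rng = np.random.default_rng(42)
true_rate = 1000.0 * np.exp(-0.001 * np.arange(365))   # synthetic declining oil rate, stb/day

systematic_level = 0.05   # +5% constant bias, e.g. an uncalibrated flowmeter
random_level     = 0.05   # 5% relative standard deviation of random noise

measured = true_rate * (1.0 + systematic_level) \
                     * (1.0 + rng.normal(0.0, random_level, true_rate.size))

# Cumulative production is barely affected by zero-mean random noise but shifts almost
# one-for-one with the systematic bias, in line with the reported dominance of
# systematic errors in the history matching results.
print(true_rate.sum(), measured.sum())
```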
41. Measurement Uncertainty of Precise Interpolating Time Counters.
- Author
-
Szplet, Ryszard, Szymanowski, Rafal, and Sondej, Dominik
- Subjects
- *
UNCERTAINTY , *TIME-digital conversion , *TIME , *MEASUREMENT , *GATE array circuits , *INTERPOLATION - Abstract
This paper presents a comprehensive analysis of the measurement uncertainty of a wide class of precise time interval counters based on the most common two-stage time interpolation method. All non-negligible sources of error are discussed, including the quantization process, nonlinearity of the time interpolators, timing jitter induced by elements of the signal paths, and the inherent jitter of the input pulses and the reference clock signal. A formula for the total measurement uncertainty and several design suggestions aimed at improving time counter accuracy and precision are provided (a generic uncertainty-combination sketch follows this record). [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
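The paper's uncertainty formula combines contributions from quantization, interpolator nonlinearity, and several jitter sources; the exact expression is in the paper and not reproduced here. The sketch below only shows the generic root-sum-of-squares combination of independent standard-uncertainty components, with placeholder picosecond values.

```python
import math

# Placeholder standard-uncertainty contributions for a two-stage interpolating
# time counter, all in picoseconds (illustrative values, not the paper's).
components_ps = {
    "quantization":              2.9,   # LSB / sqrt(12) for a ~10 ps LSB
    "interpolator_nonlinearity": 4.0,
    "signal_path_jitter":        3.0,
    "input_pulse_jitter":        2.0,
    "reference_clock_jitter":    1.5,
}

# Independent contributions combine in quadrature (root sum of squares).
total_ps = math.sqrt(sum(u ** 2 for u in components_ps.values()))
print(f"combined standard uncertainty ≈ {total_ps:.1f} ps")
```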
42. Performance Improvement of Spaceborne Carbon Dioxide Detection IPDA LIDAR Using Linearity Optimized Amplifier of Photo-Detector
- Author
-
Yadan Zhu, Juxin Yang, Xiaoxi Zhang, Jiqiao Liu, Xiaopeng Zhu, Huaguo Zang, Tengteng Xia, Chuncan Fan, Xiao Chen, Yanguang Sun, Xia Hou, and Weibiao Chen
- Subjects
spaceborne IPDA LIDAR ,carbon dioxide ,amplifier circuit optimization ,linearity ,random error ,Science - Abstract
The spaceborne double-pulse integrated-path differential absorption (IPDA) light detection and ranging (LIDAR) system was found to be helpful in observing atmospheric CO2 and understanding the carbon cycle. Airborne experiments with a scaled prototype of China’s planned spaceborne IPDA LIDAR were carried out in 2019. A problem with data inversion caused by nonlinearity of the detector module was found. Through many experiments, the amplifier circuit board (ACB) of the detector module was shown to be the main factor causing the nonlinearity. Through amplifier circuit optimization, the bandwidth of the ACB was changed to 1 MHz by using a fifth-order active filter (a filter-response sketch follows this record). Compared with the original version, the linearity error of the optimized ACB is reduced from 42.6% to 0.0747%. The optimized ACB was produced and its linearity was verified by experiments. In addition, the output waveform of the optimized ACB changes significantly, which affects the random error (RE) of the optimized IPDA LIDAR system. Performance simulations show that the RE over more than 90% of the global area is less than 0.728 ppm. Finally, a transfer model of the detector module is given, which will be helpful for further optimization of the CO2 column-averaged dry-air mixing ratio (XCO2) inversion algorithm.
- Published
- 2021
- Full Text
- View/download PDF
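The optimized board limits the ACB bandwidth to 1 MHz with a fifth-order active filter. The analog circuit itself is not given in the abstract; as a stand-in, the sketch below evaluates the magnitude response of a generic fifth-order Butterworth low-pass at 1 MHz with SciPy, just to check the bandwidth figure. The Butterworth characteristic is an assumption, and the actual filter topology may differ.

```python
import numpy as np
from scipy import signal

f_c = 1e6                                    # target -3 dB bandwidth, Hz
b, a = signal.butter(5, 2 * np.pi * f_c, btype="low", analog=True)

f = np.logspace(4, 8, 400)                   # 10 kHz .. 100 MHz
w, h = signal.freqs(b, a, worN=2 * np.pi * f)
gain_db = 20 * np.log10(np.abs(h))

idx = np.argmin(np.abs(f - f_c))
print(f"gain near 1 MHz ≈ {gain_db[idx]:.2f} dB")   # ≈ -3 dB for a Butterworth prototype
```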
43. Visual Scanning Hartmann Optical Tester (VSHOT) Uncertainty Analysis (Milestone Report)
- Author
-
Wendelin, T
- Published
- 2010
- Full Text
- View/download PDF
44. Surprise!
- Author
-
Cole, Stephen R, Edwards, Jessie K, and Greenland, Sander
- Subjects
- *
INFORMATION resources management , *STATISTICS , *DATA analysis - Abstract
Measures of information and surprise, such as the Shannon information value (S value), quantify the signal present in a stream of noisy data. We illustrate the use of such information measures in the context of interpreting P values as compatibility indices. S values help communicate the limited information supplied by conventional statistics and cast a critical light on cutoffs used to judge and construct those statistics. Misinterpretations of statistics may be reduced by interpreting P values and interval estimates using compatibility concepts and S values instead of "significance" and "confidence" (a short numerical illustration follows this record). [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
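The S value referenced here is commonly defined as the binary surprisal of a P value, S = −log2(P): the number of bits of information against the test hypothesis. A short numerical illustration:

```python
import math

def s_value(p):
    """Shannon information (surprisal) of a P value, in bits: S = -log2(p)."""
    return -math.log2(p)

for p in (0.05, 0.005, 0.25):
    print(f"P = {p:<6} ->  S = {s_value(p):.2f} bits")
# P = 0.05 corresponds to about 4.3 bits -- roughly the surprise of seeing between
# four and five heads in a row from a fair coin.
```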
45. Error Decomposition of Remote Sensing Soil Moisture Products Based on the Triple-Collocation Method Introducing an Unbiased Reference Dataset: A Case Study on the Tibetan Plateau
- Author
-
Jian Kang, Rui Jin, Xin Li, and Yang Zhang
- Subjects
error decomposition ,random error ,remote sensing product ,systematic error ,soil moisture ,triple-collocation ,Science - Abstract
Remote sensing (RS) soil moisture (SM) products have been widely used in various environmental studies. Understanding the error structure of the data is necessary to properly apply RS SM products in trend and variation analysis and data fusion. However, a spatially continuous assessment of RS SM datasets is impeded by the limited spatial distribution of ground-based observations. As an alternative, RS apparent thermal inertia (ATI) data related to the SM are transformed into SM values to expand the validation space. To obtain error components, the ATI-based SM, along with the Soil Moisture Active Passive Mission (SMAP) and Advanced Microwave Scanning Radiometer 2 (AMSR2) SM, is applied with the triple-collocation (TC) method to evaluate the RS SM data in terms of random errors and amplitude variances at the regional scale (a textbook triple-collocation sketch follows this record). When the ATI-based SM is regarded as the reference data, the amplitude biases of the other two datasets are determined. The mean bias is also estimated by calculating the difference in mean value between the ATI-based and validated RS SM. The results show that the ATI-based SM is a reliable source of reference data that, when combined with the TC method, can correctly estimate the error structure of RS SM datasets over a wide spatial extent, promoting the reasonable application and calibration of RS SM datasets.
- Published
- 2020
- Full Text
- View/download PDF
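The classical triple-collocation estimator obtains each product's random-error variance from cross-covariances of three collocated datasets with mutually uncorrelated errors. The sketch below is the textbook covariance form, not the paper's full decomposition; the synthetic series merely stand in for ATI-based, SMAP, and AMSR2 soil moisture.

```python
import numpy as np

rng = np.random.default_rng(7)
truth = 0.25 + 0.08 * np.sin(np.linspace(0, 8 * np.pi, 5000))   # synthetic SM "truth"

# Three collocated products with independent random errors (and arbitrary gains/offsets).
x = 1.00 * truth + rng.normal(0, 0.020, truth.size)             # stand-in for ATI-based SM
y = 0.90 * truth + 0.02 + rng.normal(0, 0.035, truth.size)      # stand-in for SMAP
z = 1.10 * truth - 0.01 + rng.normal(0, 0.050, truth.size)      # stand-in for AMSR2

C = np.cov(np.vstack([x, y, z]))
err_var_x = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
err_var_y = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
err_var_z = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
print([float(np.sqrt(v)) for v in (err_var_x, err_var_y, err_var_z)])   # ≈ 0.020, 0.035, 0.050
```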
46. A General Point-Based Method for Self-Calibration of Terrestrial Laser Scanners Considering Stochastic Information
- Author
-
Tengfei Zhou, Xiaojun Cheng, Peng Lin, Zhenlun Wu, and Ensheng Liu
- Subjects
self-calibration ,Gauss–Helmert model ,random error ,Gauss–Newton method ,variance component estimation ,Science - Abstract
Due to environmental and human factors, as well as the instrument itself, there are many uncertainties in point clouds, which directly affect the data quality and the accuracy of subsequent processing, such as point cloud segmentation, 3D modeling, etc. To address this problem, stochastic information of the point cloud coordinates is taken into account in this paper, and on the basis of the scanner observation principle within the Gauss–Helmert model, a novel general point-based self-calibration method is developed for terrestrial laser scanners, incorporating both five additional parameters and six exterior orientation parameters. For cases where the instrument accuracy differs from the nominal values, a variance component estimation algorithm is implemented to reweight the outliers after the residual errors of the observations are obtained. Considering that the proposed method is essentially a nonlinear model, the Gauss–Newton iteration method is applied to derive the solutions for the additional parameters and exterior orientation parameters (a generic Gauss–Newton sketch follows this record). We conducted experiments using simulated and real data and compared the results with two existing methods. The experimental results showed that the proposed method could improve the point accuracy from 10⁻⁴ to 10⁻⁸ (a priori known) and 10⁻⁷ (a priori unknown), and reduced the correlation among the parameters (by approximately 60% in volume). However, it is undeniable that some correlations increased instead, which is a limitation of the general method.
- Published
- 2020
- Full Text
- View/download PDF
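The calibration model in this record is solved by Gauss–Newton iteration within a Gauss–Helmert adjustment; reproducing the full scanner model is beyond the scope of this note. The sketch below shows only the generic Gauss–Newton update on a small nonlinear least-squares toy problem, as a reminder of the iteration the authors rely on.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Generic Gauss-Newton: minimize ||residual(x)||^2 for a differentiable residual."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]   # solve J dx = -r in the LS sense
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy example: fit y = a * exp(b * t) to noisy data (a stand-in for a calibration model).
t = np.linspace(0, 1, 30)
y = 2.0 * np.exp(-1.3 * t) + np.random.default_rng(3).normal(0, 0.01, t.size)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
print(gauss_newton(res, jac, [1.0, -1.0]))           # ≈ [2.0, -1.3]
```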
47. Use of Six Sigma in the Leukocyte Count Examination at RSUD Panembahan Senopati Bantul
- Author
-
Hieronymus Rayi Prasetya, Nurlaili Farida Muhajir, and Magdalena Putri Iriyanti Dumatubun
- Subjects
Laboratory examination ,Hematology analyzer ,Random error ,Statistics ,Six Sigma ,Normal level ,Internal quality ,Mathematics - Abstract
Internal quality assurance is a prevention and control activity that must be carried out by the laboratory continuously and covers all aspects of the laboratory examination parameters. Hematology examinations in the laboratory are carried out using a hematology analyzer, but this instrument has limitations, one of which is that it can make errors in reading the leukocyte count. For the instrument's results to be reliable, quality control of the hematology analyzer is necessary. Westgard multirules are commonly used in laboratories, but the application of Six Sigma is still very rare, especially in the field of hematology. This research aims to assess the internal quality control of the analytical stage of the hematology analyzer leukocyte count based on Westgard and Six Sigma analysis. This is a descriptive study. The sample comprises one month of control value data for the leukocyte count at Panembahan Senopati Hospital. The data were analyzed using the Westgard rules and Six Sigma. At the low level, a 13s violation (random error) was found; at the normal level, a 12s violation (warning); and at the high level, a 12s violation (warning). The sigma metric at all control levels is above 6. Six Sigma analysis of the leukocyte count showed an average of 7.16 sigma, which indicates that leukocyte examination using the hematology analyzer has an accuracy of 99.9% (a sigma-metric calculation sketch follows this record).
- Published
- 2021
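The six sigma metric for a QC level is conventionally computed as (TEa − |bias|) / CV, with all terms in percent. The sketch below uses placeholder values for the leukocyte count, since the paper's own TEa, bias, and CV are not given in the abstract; it simply shows how a sigma value around the reported 7 arises from that formula.

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Six sigma metric for one QC level: (allowable total error - |bias|) / CV."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Placeholder values for a leukocyte (WBC) control level -- illustrative only.
tea, bias, cv = 15.0, 1.0, 2.0        # percent
print(f"sigma = {sigma_metric(tea, bias, cv):.1f}")   # 7.0, i.e. well above the 6-sigma benchmark
```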
48. Complex Cytochrome P450 Kinetics Due to Multisubstrate Binding and Sequential Metabolism. Part 1. Theoretical Considerations
- Author
-
Erickson M. Paragas, Swati Nagar, Zeyuan Wang, and Ken Korzekwa
- Subjects
Pharmacology ,Binding Sites ,Metabolic Clearance Rate ,Numerical analysis ,Kinetics ,Ode ,Pharmaceutical Science ,Sigmoid function ,Models, Biological ,Biophysical Phenomena ,Mixed Function Oxygenases ,Substrate Specificity ,Modeling and simulation ,Cytochrome P-450 Enzyme System ,Ordinary differential equation ,Random error ,Cytochrome P-450 Enzyme Inhibitors ,Humans ,Enzyme kinetics ,Biological system ,Mathematics - Abstract
Complexities in P450-mediated metabolism kinetics include multisubstrate binding, multiple-product formation, and sequential metabolism. Saturation curves and intrinsic clearances were simulated for single-substrate and multisubstrate models using derived velocity equations and numerical solutions of ordinary differential equations (ODEs). The multisubstrate models focused on sigmoidal kinetics because of their dramatic impact on clearance predictions. These models were combined with multiple-product formation and sequential metabolism, and simulations were performed with random error. Use of single-substrate models to characterize multisubstrate data can result in inaccurate kinetic parameters and poor clearance predictions. Comparing the results from standard velocity equations with those from ODEs clearly shows that ODEs are more versatile and provide better parameter estimates (an ODE-integration sketch follows this record). It would be difficult to derive concentration-velocity relationships for complex models, but these relationships can be easily modeled using numerical methods and ODEs. SIGNIFICANCE STATEMENT: The impact of multisubstrate binding, multiple-product formation, and sequential metabolism on P450 kinetics was investigated. Numerical methods are capable of characterizing complicated P450 kinetics.
- Published
- 2021
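The paper argues that sigmoidal multisubstrate kinetics are better handled by integrating ODEs numerically than by fitting standard velocity equations. The sketch below is not the authors' model; it simply integrates substrate depletion under a Hill-type rate law with SciPy to show the numerical-ODE workflow, with arbitrary illustrative parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hill-type (sigmoidal) rate law as a stand-in for multisubstrate P450 kinetics.
Vmax, S50, n = 10.0, 5.0, 2.0            # illustrative parameters, arbitrary units

def depletion(t, y):
    s = y[0]
    v = Vmax * s**n / (S50**n + s**n)    # sigmoidal velocity
    return [-v]

sol = solve_ivp(depletion, (0.0, 5.0), [20.0], t_eval=np.linspace(0, 5, 11))
for t, s in zip(sol.t, sol.y[0]):
    print(f"t = {t:.1f}  [S] = {s:.2f}")
```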
49. Consistency properties for the wavelet estimator in nonparametric regression model with dependent errors
- Author
-
Mingming Chen and Qihui He
- Subjects
Mean consistency ,Complete consistency ,Applied Mathematics ,m-extended negative dependence ,Wavelet estimator ,Nonparametric regression ,Rate of consistency ,Consistency (statistics) ,Nonparametric regression model ,Random error ,Statistics ,QA1-939 ,Discrete Mathematics and Combinatorics ,Analysis ,Mathematics - Abstract
In this paper, we establish the p-th mean consistency, complete consistency, and the rate of complete consistency for the wavelet estimator in a nonparametric regression model with m-extended negatively dependent random errors. We show that the best rates can be nearly $O(n^{-1/3})$ under some general conditions. The results obtained in the paper markedly improve and extend some corresponding ones to a much more general setting.
- Published
- 2021
50. Ridge estimation iterative solution of ill-posed mixed additive and multiplicative random error model with equality constraints
- Author
-
Tao Chen and Leyang Wang
- Subjects
Well-posed problem ,Estimation ,QB275-343 ,Estimation theory ,QC801-809 ,Multiplicative function ,Equality constraints ,Geophysics. Cosmic physics ,U-curve method ,Ridge (differential geometry) ,Weighted least squares ,Ridge estimation method ,Geophysics ,Random error ,Ill-posed problem ,Applied mathematics ,Computers in Earth Sciences ,Coefficient matrix ,Prior information ,Mixed additive and multiplicative random error model ,Geodesy ,Earth-Surface Processes ,Mathematics - Abstract
Reasonable prior information relating the parameters in adjustment processing can significantly improve the precision of the parameter solution. Based on the principle of equality constraints, we establish the mixed additive and multiplicative random error model with equality constraints and derive the weighted least squares iterative solution of the model. In addition, to address the ill-posedness of the coefficient matrix, we also propose a ridge estimation iterative solution of the ill-posed mixed additive and multiplicative random error model with equality constraints, based on the ridge estimation method, and derive a U-curve method to determine the ridge parameter (a constrained ridge least-squares sketch follows this record). The experimental results show that the weighted least squares iterative solution can obtain more reasonable parameter estimates and precision information than existing solutions, verifying the feasibility of applying equality constraints to the mixed additive and multiplicative random error model. Furthermore, the ridge estimation iterative solution can obtain more accurate parameter estimates and precision information than the weighted least squares iterative solution.
- Published
- 2021
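The full model combines equality-constrained least squares with ridge regularization for an ill-posed coefficient matrix; the multiplicative-error weighting and the U-curve choice of the ridge parameter are in the paper and not reproduced here. The sketch below solves only the simpler building block, min ||Ax − y||² + k||x||² subject to Cx = d, via the KKT system, with an assumed ridge parameter k.

```python
import numpy as np

def ridge_equality_ls(A, y, C, d, k):
    """Solve min ||Ax - y||^2 + k ||x||^2  subject to  Cx = d  via the KKT system."""
    n, m = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A + k * np.eye(n), C.T],
                  [C,                       np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ y, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                       # parameters (remaining entries are Lagrange multipliers)

# Ill-conditioned toy design matrix with a constraint that the parameters sum to 1.
rng = np.random.default_rng(0)
x_grid = np.linspace(0, 1, 20)
A = np.column_stack([np.ones(20), x_grid, x_grid + 1e-6 * rng.normal(size=20)])
y = A @ np.array([0.3, 0.5, 0.2]) + 0.01 * rng.normal(size=20)
C, d = np.array([[1.0, 1.0, 1.0]]), np.array([1.0])
print(ridge_equality_ls(A, y, C, d, k=1e-3))
```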