6,686 results for "sampling errors"
Search Results
2. A modification of the periodic nonuniform sampling involving derivatives with a Gaussian multiplier.
- Author
-
Asharabi, Rashad M. and Khirallah, Mustafa Q.
- Subjects
- *
IRREGULAR sampling (Signal processing) , *APPROXIMATION theory , *INTEGRAL functions , *EXPONENTIAL functions , *SAMPLING errors , *ANALYTIC functions - Abstract
The periodic nonuniform sampling series, involving periodic samples of both the function and its first r derivatives, was initially introduced by Nathan (Inform Control 22: 172–182, 1973). Since then, various authors have extended this sampling series in different contexts over the past decades. However, the application of the periodic nonuniform derivative sampling series in approximation theory has been limited due to its slow convergence. In this article, we introduce a modification to the periodic nonuniform sampling involving derivatives by incorporating a Gaussian multiplier. This modification results in a significantly improved convergence rate, which now follows an exponential order. This is a significant improvement compared to the original series, which had a convergence rate of O(N^(-1/p)), where p > 1. The introduced modification relies on a complex-analytic technique and is applicable to a wide range of functions. Specifically, it is suitable for the class of entire functions of exponential type that satisfy a decay condition, as well as for the class of analytic functions defined on a horizontal strip. To validate the presented theoretical analysis, the paper includes rigorous numerical experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
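For orientation on result 2 above, the sketch below shows the general Gaussian-regularization idea for the classical (uniform) truncated sampling series; it is illustrative only and is not the authors' periodic nonuniform derivative operator.

```latex
% Illustrative only: Gaussian-regularized truncated sampling series (uniform case).
% h is the sampling step, N the truncation level; the Gaussian width sigma_N is
% tied to N so that the truncation error decays exponentially in N instead of at
% the algebraic rate O(N^(-1/p)) quoted in the abstract.
f(t) \approx \sum_{|n - \lfloor t/h \rfloor| \le N}
  f(nh)\,\operatorname{sinc}\!\Big(\frac{t}{h} - n\Big)\,
  \exp\!\Big(-\frac{(t - nh)^2}{2\sigma_N^2}\Big)
```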
3. Online correction method for phase current gain errors in permanent magnet synchronous motor sensorless control.
- Author
-
Wu, Chun, Sha, Weimin, and Zhu, Chunqiao
- Subjects
- *
PERMANENT magnet motors , *SAMPLING errors , *SLIDING mode control , *PROBLEM solving , *MATHEMATICAL models - Abstract
An online correction method for phase current sampling gain errors utilizing the phase current zero-crossing principle is proposed in this paper to solve the problems of torque/speed pulsation and inaccurate position estimation in permanent magnet synchronous motor (PMSM) drives. First, a mathematical model considering three-phase current sampling errors was built and a variable-gain sliding mode observer (SMO) was designed. Second, the impacts of current gain errors on electromagnetic torque and position estimation were analyzed. When any one of the three-phase currents crosses zero, the amplitudes of the other two-phase currents are the same but the signs are opposite. Based on this principle, the current gain ratios of these two-phase currents can be calculated. Finally, comparing three sets of current gain ratios, the faulty phase can be determined, and online correction of the current gain error is executed. Experimental results show that the proposed method can determine faults regardless of whether there is a large or small gain error. In addition, the proposed method can balance the three-phase currents, improve the accuracy of position estimation, reduce torque pulsation, and enhance the speed control performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
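As a side note on result 3 above, the following is a minimal numerical sketch of the zero-crossing principle described in the abstract; the toy waveform, gain values, and variable names are assumptions for illustration, not data from the paper.

```python
import numpy as np

# When phase a crosses zero, the true currents satisfy i_b = -i_c (since
# i_a + i_b + i_c = 0), so the ratio of the *measured* values at that instant
# directly yields the gain ratio g_b / g_c.
g_b, g_c = 1.05, 0.95                       # hypothetical sampling-gain errors
t = np.linspace(0.0, 0.02, 20001)           # one 50 Hz electrical period
w = 2 * np.pi * 50
i_b = np.sin(w * t - 2 * np.pi / 3)
i_c = np.sin(w * t + 2 * np.pi / 3)

k = np.argmin(np.abs(t - 0.01))             # instant where i_a = sin(w*t) crosses zero
ratio = -(g_b * i_b[k]) / (g_c * i_c[k])    # measured i_b over measured i_c, sign-flipped
print(ratio, g_b / g_c)                     # both ~1.105
```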
4. Nonparametric estimation of P(X<Y) from noisy data samples with non-standard error distributions.
- Author
-
Phuong, Cao Xuan and Thuy, Le Thi Hong
- Subjects
- *
RANDOM variables , *LEBESGUE measure , *NONPARAMETRIC estimation , *MEASUREMENT errors , *SAMPLING errors - Abstract
Let X, Y be continuous random variables with unknown distributions. The aim of this paper is to study the problem of estimating the probability θ := P(X < Y) based on independent random samples from the distributions of X′, Y′, ζ and η, where X′ = X + ζ, Y′ = Y + η and X, Y, ζ, η are mutually independent random variables. In this context, ζ, η are referred to as measurement errors. We apply the ridge-parameter regularization method to derive a nonparametric estimator for θ depending on two parameters. Our estimator is shown to be consistent with respect to mean squared error if the characteristic functions of ζ, η only vanish on Lebesgue measure zero sets. Under some further assumptions on the densities of X, Y, ζ and η, we obtain some upper and lower bounds on the convergence rate of the estimator. A numerical example is also given to illustrate the efficiency of our method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Accelerated 3D multi‐channel B1+ mapping at 7 T for the brain and heart.
- Author
-
Kent, James L., de Buck, Matthijs H. S., Dragonu, Iulius, Chiew, Mark, Valkovič, Ladislav, and Hess, Aaron T.
- Subjects
BODY image ,BRAIN mapping ,SAMPLING errors ,MAPS ,MAGNETIC resonance imaging - Abstract
Purpose: To acquire accurate volumetric multi‐channel B1+ maps in under 14 s (whole brain) or 23 heartbeats (whole heart) for parallel transmit (pTx) applications at 7 T. Theory and Methods: We evaluate the combination of three recently proposed techniques. The acquisition of multi‐channel transmit array B1+ maps is accelerated using transmit low rank (TxLR) with absolute B1+ mapping (Sandwich) acquired in a B1+ time‐interleaved acquisition of modes (B1TIAMO) fashion. Simulations using synthetic body images derived from Sim4Life were used to test the achievable acceleration for small scan matrices of 24 × 24. Next, we evaluated the method by retrospectively undersampling a fully sampled B1+ library of nine subjects in the brain. Finally, Cartesian undersampled phantom and in vivo images were acquired in both the brain of three subjects (8Tx/32 receive [Rx]) and the heart of another three subjects (8Tx/8Rx) at 7 T. Results: Simulation and in vivo results show that volumetric multi‐channel B1+ maps can be acquired using acceleration factors of 4 in the body, reducing the acquisition time to within 23 heartbeats, which was previously not possible. In silico heart simulations demonstrated an RMS error to the fully sampled native resolution ground truth of 4.2° when combined in first‐order circularly polarized mode (mean flip angle 66°) at an acceleration factor of 4. The 14 s 3D B1+ maps acquired in the brain have an RMS error of 1.9° to the fully sampled (mean flip angle 86°). Conclusion: The proposed method is demonstrated as a fast pTx calibration technique in the brain and a promising method for pTx calibration in the body. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Clustering Functional Data With Measurement Errors: A Simulation‐Based Approach.
- Author
-
Zhu, Tingyu, Xue, Lan, Tekwe, Carmen, Diaz, Keith, Benden, Mark, and Zoh, Roger
- Subjects
- *
MEASUREMENT errors , *DATA structures , *SAMPLING errors , *CHILDHOOD obesity , *FUNCTIONAL analysis - Abstract
Clustering analysis of functional data, which comprises observations that evolve continuously over time or space, has gained increasing attention across various scientific disciplines. Practical applications often involve functional data that are contaminated with measurement errors arising from imprecise instruments, sampling errors, or other sources. These errors can significantly distort the inherent data structure, resulting in erroneous clustering outcomes. In this article, we propose a simulation‐based approach designed to mitigate the impact of measurement errors. Our proposed method estimates the distribution of functional measurement errors through repeated measurements. Subsequently, the clustering algorithm is applied to simulated data generated from the conditional distribution of the unobserved true functional data given the observed contaminated functional data, accounting for the adjustments made to rectify measurement errors. We show through simulations that the proposed method has better numerical performance than naive methods that neglect such errors. Our proposed method was applied to a childhood obesity study, giving more reliable clustering results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Can clinical guidelines reduce variation in transfusion practice? A pre–post study of blood transfusions during cardiac surgery.
- Author
-
Irving, Adam, Harris, Anthony, Petrie, Dennis, Avdic, Daniel, Smith, Julian, Tran, Lavinia, Reid, Christopher M., and McQuilten, Zoe K.
- Subjects
- *
CARDIAC surgery , *ERYTHROCYTES , *PUBLIC hospitals , *RED blood cell transfusion , *SAMPLING errors , *ELECTIVE surgery - Abstract
Background and Objectives: Previously published studies have consistently identified significant variation in red blood cell (RBC) transfusions during cardiac surgery. Clinical guidelines can be effective at improving the average quality of care; however, their impact on variation in practice is rarely studied. Herein, we estimated how variation in RBC use across cardiac surgeons changed after the publication of national patient blood management guidelines. Materials and Methods: We performed a pre–post study estimating change in variation in RBC transfusions across 80 cardiac surgeons in 29 hospitals using a national cardiac surgery registry. Variation across surgeons was estimated using fixed‐effects regressions controlling for surgery and patient characteristics and an empirical Bayes shrinkage to adjust for sampling error. RBC use was measured by three metrics: the total number of units transfused, the proportion of patients transfused and the number of units transfused, conditional on receiving RBC. Results: The primary analysis utilized 35,761 elective cardiac surgeries performed between March 2009 and February 2015 and identified a 24.5% reduction (p < 0.0001) in mean total units transfused accompanied by a 37.2% reduction (p = 0.040) in the variation across surgeons. The reduction in mean total units was driven by both the proportion of patients transfused and the number of units transfused, conditional on receiving RBC, while the reduction in variation was only driven by the latter. Conclusion: In our study of RBC transfusions across cardiac surgeons, the surgeons who used more RBC in the pre‐guideline period experienced larger reductions in RBC use after the guidelines were published. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Capability of the Mediterranean Argo network to monitor sub-regional climate change indicators.
- Author
-
Chevillard, Carla, Juza, Mélanie, Díaz-Barroso, Lara, Reyes, Emma, Escudier, Romain, and Tintoré, Joaquín
- Subjects
ENTHALPY ,SAMPLING errors ,OCEAN dynamics ,CLIMATE change ,OCEAN - Abstract
The Argo array of profiling floats has considerably increased the observing capability of the three-dimensional global ocean and the knowledge of the ocean response to climate change. In particular, the Argo sampling has allowed observing relevant ocean indicators over the whole Mediterranean Sea especially during the last decade. In this study, the Mediterranean Argo network is comprehensively described from its spatio-temporal coverage to its capability to observe ocean monitoring indicators at sub-regional scale. For this purpose, the Argo array, as a non-interpolated product of profiles, is used to estimate the ocean heat and salt contents integrated within the upper, intermediate and deep layers over the period 2013-2022 in the different sub-regions of the basin. The same computational method is also applied to a model reanalysis product to estimate the impact of sampling of the sole Argo array. The sampling error is defined at sub-regional scale by comparing estimations from the whole model grid (full-sampled model) and from the Argo-like sampled model grid (subsampled model). Warming and salinification trends are well captured by the Argo array over the period of study, warming trends being the highest in the subregions of the western Mediterranean Sea from surface to depth and salinification trends being higher in the eastern sub-basin for the upper layer and in the western sub-basin for the deeper layers. This study also demonstrates the capability of the Argo array to capture local ocean structures and dynamics (e.g. anticyclonic and cyclonic gyres, intermediate and deep convection events and Atlantic Water inflows) and to account for their impact in the sub-regional variability of ocean heat and salt contents in the upper, intermediate and deep layers from seasonal to interannual scales. Considering these structures is fundamental for the understanding of the thermohaline circulation and changes observed in the Mediterranean Sea, and thus for future climate studies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. A Novel Bootstrapped CMOS Switch with Minimized Sampling and Holding Error Using Sampling Window Error Analysis.
- Author
-
Sharma, Buddhi Prakash, Mysakshi, Chandu, Kumar, Shivam, Gupta, Anu, and Shekhar, Chandra
- Subjects
- *
SAMPLING errors , *CHARGE injection , *COMPLEMENTARY metal oxide semiconductors , *INTERNET of things , *SPEED - Abstract
This study proposes a novel 6-transistor bootstrapped switch with minimized sampling and holding error obtained through sampling window error analysis for SAR ADC design. The proposed switch design strategically mitigates channel charge injection and minimizes the input signal dependency of on-resistance by optimizing its sizing parameters. To counteract channel charge effects, dummy NMOS and PMOS components are judiciously employed, culminating in a substantial improvement in the effective number of bits (ENOB). The complete analysis of the proposed circuit is done using the Cadence Virtuoso SCL 0.18 μm CMOS process. For a 51.514 kHz sinusoidal 1 V peak-to-peak differential input signal with a 1 MSPS clock speed, the proposed circuit achieves 2.0141 mV maximum sampling window error, 0.131 μW power consumption, 84.67 dB signal-to-noise ratio (SNR), 84.67 dB signal-to-noise and distortion (SINAD) ratio and 86.02 dB spurious-free dynamic range (SFDR), which produces 13.77 bits ENOB. To assess the impacts of process variations and mismatch on switch performance, a comprehensive 500-point Monte Carlo (MC) simulation of the proposed bootstrap switch is conducted in this study. Post-layout results show that the proposed circuit is suitable for IoT applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Estimation of Suspended Matter Concentration in Manwan Aquaculture area Based on GF-1 WFV.
- Author
-
XU Yao-han, YAO Yue, XING Xiao-da, YAN Dong-mei, LIU Shen-dong, TIAN Dong-po, and ZHU Xiao-dan
- Subjects
TOTAL suspended solids ,AQUACULTURE ,STANDARD deviations ,SUSPENDED solids ,REMOTE sensing ,ROOT-mean-squares ,SAMPLING errors - Abstract
Net cage aquaculture has a significant impact on the regional water environment, and studying the changes in water quality parameters in net cage aquaculture areas is of great practical significance for understanding the impact of regional aquaculture on the water environment. This article uses GF-1 WFV data to construct an inversion model for the total suspended solids concentration in the net cage aquaculture area, which has grown exponentially in the Manwan Reservoir area in recent years. The results indicate that the model has high accuracy, with an average relative error of 9.65% between the inverted values and the measured values, and a root mean square error of 0.33 mg/L. Based on the constructed inversion model and satellite imagery, the total suspended solids concentration in the Manwan net cage aquaculture area was inverted, and the variation patterns of total suspended solids concentration in the reservoir area and at different net cage positions were analyzed. The research found that the variation pattern of suspended solids concentration in the Manwan reservoir area and at different net cage positions is basically consistent. The total suspended solids concentration in net cages has not shown any abnormalities due to the phenomenon of local "delineation" in aquaculture, but is lower than the overall suspended solids concentration in the reservoir area. This is mainly because the variation of total suspended solids concentration is primarily affected by precipitation, surface runoff, water flow velocity, etc. Net cage aquaculture can reduce its impact on the total suspended solids concentration in local "delineation" areas. This study has certain reference significance for understanding the changes in total suspended solids concentration in the Manwan reservoir area and net cage aquaculture area. In the future, a multi-source remote sensing estimation model for water quality parameters will be developed to better understand the changing patterns of the water environment in the Manwan net cage aquaculture area. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
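For result 10 above, the small sketch below reproduces the two accuracy metrics reported in the abstract (mean relative error and RMSE); the sample values are hypothetical and are only meant to show how the numbers are computed.

```python
import numpy as np

def accuracy_metrics(measured, inverted):
    """Mean relative error (%) and root mean square error (same unit as inputs)."""
    measured = np.asarray(measured, dtype=float)
    inverted = np.asarray(inverted, dtype=float)
    mre = np.mean(np.abs(inverted - measured) / measured) * 100.0
    rmse = np.sqrt(np.mean((inverted - measured) ** 2))
    return mre, rmse

# Hypothetical total-suspended-solids samples (mg/L); the paper reports an
# average relative error of ~9.65% and an RMSE of 0.33 mg/L on its own data.
measured = [3.1, 2.8, 4.0, 3.6, 2.5]
inverted = [3.3, 2.6, 4.2, 3.4, 2.7]
print(accuracy_metrics(measured, inverted))
```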
11. An Adaptive Sliding Mode Control Using a Novel Adaptive Law Based on Quasi-Convex Functions and Average Sliding Variables for Robot Manipulators.
- Author
-
Seo, Dong Hee, Lee, Jin Woong, An, Hyuk Mo, and Lee, Seok Young
- Subjects
SLIDING mode control ,ROBOT control systems ,SAMPLING errors ,ROBOTS ,MANIPULATORS (Machinery) - Abstract
This paper proposes a novel adaptive law that uses a quasi-convex function and a novel sliding variable in an adaptive sliding mode control (ASMC) scheme for robot manipulators. Since the dynamic equations of robot manipulators inevitably include model uncertainties and disturbances, time-delay estimation (TDE) errors occur when using the time-delay control (TDC) approach. Further, the ASMC method used to compensate for TDE errors naturally causes a chattering phenomenon. To improve tracking performance while reducing or maintaining chattering, this paper proposes an adaptive law based on a quasi-convex function that is convex at the origin and concave at the gain switching point. We also adopt a novel sliding variable that uses previously sampled tracking errors and their time derivatives. Further, this paper proves that the sliding variable of the robot manipulator controlled by the proposed ASMC satisfies uniformly ultimately bounded stability. The simulation and experimental results illustrate the effectiveness of the proposed methods in terms of tracking performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Current situation and future challenges of municipal sports services: the perception of sports managers [Situación actual y retos de futuro de los servicios deportivos municipales: percepción de los responsables deportivos].
- Author
-
de Aldama Ortuzar, Inmaculada Martínez, Cayero, Ruth, Teruelo Terreras, Bonifacio, González Bravo, Javier, and Yanci Irigoyen, Javier
- Subjects
MUNICIPAL budgets ,MANAGERS of sports teams ,MUNICIPAL services ,SAMPLING errors ,LEAD ,HABIT - Abstract
Copyright of Retos: Nuevas Perspectivas de Educación Física, Deporte y Recreación is the property of Federacion Espanola de Asociaciones de Docentes de Educacion Fisica and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
13. Sampling Error of Mean and Trend of Nighttime Air Temperature at Weather Stations Over China Using Satellite Data as Proxy.
- Author
-
Chen, Linghong and Wang, Kaicun
- Subjects
LAND surface temperature ,CLIMATE change detection ,SAMPLING errors ,URBAN heat islands ,TEMPERATURE inversions - Abstract
Meteorological observations of surface air temperature have provided fundamental data for climate change detection and attribution. However, the weather stations are unevenly distributed and are still very sparse in remote regions. The possible sampling error is well known but not well quantified, because adequate and regularly distributed measurements are lacking. High-resolution satellite retrievals of land surface temperature during nighttime provide a good proxy for near-surface air temperature, as both temperatures are controlled by surface longwave radiative cooling and the nocturnal temperature inversion suppresses land‐atmosphere turbulent exchange. The sampling errors of the mean value and trend were assessed by comparing station point measurements (pixels of ∼0.01°) with grid (1°) means and the national mean from 2001 to 2021. This method permits the first assessment of the under‐sampling error and the spatial representative error in both the national mean and the trend of nighttime air temperature collected at ∼2,400 weather stations over China. The sampling error in the national mean temperature is more than 3°C. The under‐sampling error due to the lack of observations explains two thirds, and the spatial representative error due to the difference between station and grid/regional mean elevation contributes the other one third. The sampling error in the trend accounts for one third of the national mean trend. The urban heat island effect associated with urbanization around the weather stations (spatial representative error) can explain four fifths of the sampling error in the trend, which is consistent with existing studies based on air temperature collected at paired weather stations. Plain Language Summary: Meteorological observations provide fundamental measurements of surface air temperature. These station‐based temperature records are usually processed into gridded data sets to account for the uneven distribution of meteorological stations. The gridding procedure therefore represents non‐uniform sampling. In this paper, we quantified the sampling error over China using remotely sensed temperature records as a proxy for near-surface temperature during nighttime. The causes of the sampling error in both the mean value and the trend were quantified. Key Points: (1) The sampling errors of the mean and trend of nighttime temperature collected at weather stations over China were investigated. (2) The sampling error in the national mean temperature was found to be more than 3°C due to the uneven distribution of weather stations. (3) The sampling error in the temperature trend accounts for one third of the national mean temperature trend due to local urbanization. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
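As a toy illustration of the under-sampling and representativeness error quantified in result 13 above (this is a schematic, not the paper's actual workflow or data), the snippet compares the mean of a synthetic temperature field over all grid cells with the mean over "station" cells that preferentially sit in warmer locations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic nighttime temperatures for 10,000 grid cells (deg C).
field = rng.normal(loc=10.0, scale=5.0, size=10_000)

# Stations are placed only in the warmer half of the cells, mimicking networks
# that favour low-elevation or urbanized locations.
warmer_half = np.where(field > np.median(field))[0]
stations = rng.choice(warmer_half, size=2_400, replace=False)

full_mean = field.mean()                  # "fully sampled" regional mean
station_mean = field[stations].mean()     # mean seen by the station network
print(f"sampling error of the mean: {station_mean - full_mean:+.2f} deg C")
```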
14. Event‐triggered stabilization for systems with an unknown control direction.
- Author
-
Zhu, Lijun and Chen, Zhiyong
- Subjects
- *
SAMPLING errors , *NONLINEAR systems - Abstract
In this article, we present an event‐triggered control law for systems with an unknown control direction, where the adaptation dynamics of the Nussbaum gain are sampled. Specifically, we employ the emulation method and introduce a specific set of sampling errors. This approach enables the design of an event‐triggered law without encountering Zeno behavior. Additionally, we introduce a lemma to address the non‐integrability issue of the Nussbaum gain dynamics during the event‐triggered law design. We apply this design philosophy to co‐design both the continuous‐time controller and the event‐triggered law, aiming to achieve the global stabilization of systems with an unknown control direction. Finally, we validate the theoretical results through a numerical example. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Leveraging gene correlations in single cell transcriptomic data.
- Author
-
Silkwood, Kai, Dollinger, Emmanuel, Gervin, Joshua, Atwood, Scott, Nie, Qing, and Lander, Arthur D.
- Subjects
- *
BIOLOGICAL variation , *REGULATOR genes , *SKEWNESS (Probability theory) , *RNA sequencing , *SAMPLING errors - Abstract
Background: Many approaches have been developed to overcome technical noise in single cell RNA-sequencing (scRNAseq). As researchers dig deeper into data—looking for rare cell types, subtleties of cell states, and details of gene regulatory networks—there is a growing need for algorithms with controllable accuracy and fewer ad hoc parameters and thresholds. Impeding this goal is the fact that an appropriate null distribution for scRNAseq cannot simply be extracted from data in which ground truth about biological variation is unknown (i.e., usually). Results: We approach this problem analytically, assuming that scRNAseq data reflect only cell heterogeneity (what we seek to characterize), transcriptional noise (temporal fluctuations randomly distributed across cells), and sampling error (i.e., Poisson noise). We analyze scRNAseq data without normalization—a step that skews distributions, particularly for sparse data—and calculate p values associated with key statistics. We develop an improved method for selecting features for cell clustering and identifying gene–gene correlations, both positive and negative. Using simulated data, we show that this method, which we call BigSur (Basic Informatics and Gene Statistics from Unnormalized Reads), captures even weak yet significant correlation structures in scRNAseq data. Applying BigSur to data from a clonal human melanoma cell line, we identify thousands of correlations that, when clustered without supervision into gene communities, align with known cellular components and biological processes, and highlight potentially novel cell biological relationships. Conclusions: New insights into functionally relevant gene regulatory networks can be obtained using a statistically grounded approach to the identification of gene–gene correlations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Efficacy, safety, and impact of fluorescein in frameless stereotactic needle biopsies – a case series.
- Author
-
Dellaretti, Marcos, de Lima, Franklin Bernardes Faraj, de Sena, Pedro Henrique Velasco Pondé, Figueiredo, Hian Penna Gavazza, Albuquerque, João Pedro Santos, Gomes, Fernando Cotrim, Dias Faria, Barbara Caroline, and de Almeida, Júlio César
- Subjects
- *
SAMPLING errors , *FLUORESCEIN , *BIOPSY , *FLUORESCENCE , *ACQUISITION of data - Abstract
Stereotactic needle biopsy stands as a crucial method for diagnosing intracranial lesions unsuitable for surgical intervention. Nonetheless, the potential for sampling errors has led to innovative approaches to enhance diagnostic precision. This study contrasts the outcomes of patients undergoing fluorescein-assisted frameless stereotactic needle biopsy with those receiving traditional biopsies to evaluate the impact on diagnostic accuracy and safety. This study included patients with contrast-enhancing intracranial lesions, comprising a prospective group undergoing fluorescein-assisted biopsies and a retrospective group undergoing conventional biopsies at the same institution. We collected data on demographics, procedural specifics, diagnostic outcomes, and postoperative events. A comparative analysis involved 43 patients who received fluorescein-assisted biopsies against 77 patients who underwent conventional biopsies. The average age was 60.5 years. The fluorescein group exhibited a 93% success rate in diagnosis, markedly higher than the 70.1% in the non-fluorescein group (OR = 5.67; 95% CI: 1.59–20.24; p < 0.01). The rate of complications was statistically similar across both cohorts. Despite its established value, stereotactic needle biopsy is susceptible to inaccuracies and complications. The application of fluorescence-based adjuncts like 5-ALA and fluorescein has been investigated to improve diagnostic fidelity and reduce risks. These technologies potentially minimize the necessity for multiple biopsies, decrease surgical duration, and provide immediate verification of tumor presence. Fluorescein-assisted stereotactic biopsy emerges as an effective, secure alternative to conventional methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Introducing prediction intervals for sample means.
- Author
-
Contini, Molly E., Spence, Jeffrey R., and Stanley, David J.
- Subjects
- *
SAMPLING errors , *DESCRIPTIVE statistics , *INFERENTIAL statistics , *RESEARCH personnel , *STANDARD deviations - Abstract
Researchers and practitioners are typically familiar with descriptive statistics and statistical inference. However, outside of regression techniques, little attention may be given to questions around prediction. In the current paper, we introduce prediction intervals using fundamental concepts that are learned in descriptive and inferential statistical training (i.e., sampling error, standard deviation). We walk through an example using simple hand calculations and reference a simple R package that can be used to calculate prediction intervals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
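Result 17 above concerns prediction intervals for a sample mean; one standard textbook form (assumed here for illustration, not quoted from the paper) is shown below.

```latex
% A 100(1 - alpha)% prediction interval for the mean of a future sample of size m,
% given an observed sample of size n with mean xbar and standard deviation s;
% the extra 1/m term is what distinguishes it from an ordinary confidence interval.
\bar{x} \;\pm\; t_{1-\alpha/2,\, n-1}\; s \sqrt{\frac{1}{n} + \frac{1}{m}}
```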
18. Landslide susceptibility prediction modelling based on semi‐supervised XGBoost model.
- Author
-
Shua, Qiangqiang, Peng, Hongbin, and Li, Jingkai
- Subjects
- *
MACHINE learning , *LANDSLIDE hazard analysis , *LANDSLIDE prediction , *SAMPLING errors , *LANDSLIDES , *PROBLEM solving - Abstract
In the process of landslide susceptibility prediction (LSP) modelling, there are some problems in the model dataset relating to landslide and non‐landslide samples, such as landslide sample errors, subjective randomness and low accuracy of non‐landslide sample selection. In order to solve the above problems, a semi‐supervised machine learning model for LSP is innovatively proposed. Firstly, Yanchang County of Shanxi Province, China, is taken as the study area. Secondly, the frequency ratio values of 12 environmental factors (elevation, slope, aspect, etc.) and twice as many randomly selected non‐landslide samples are used to form the initial model datasets. Thirdly, an extreme gradient boosting (XGBoost) model is adopted for training and testing the initial datasets, so as to produce initial landslide susceptibility maps (LSMs) which are divided into very low, low, moderate, high and very high susceptibility levels. Next, the landslide samples falling in initial LSM areas with very low and low susceptibility levels are excluded to improve the accuracy of the landslide samples, and twice as many unlabelled non‐landslide samples are randomly selected from initial LSM areas with low and very low susceptibility levels to ensure the accuracy of the non‐landslide samples. These newly obtained landslide and non‐landslide samples are reimported into the XGBoost model to construct the semi‐supervised XGBoost (SSXGBoost) model. Finally, accuracy, the kappa coefficient and statistical indexes of the susceptibility indexes are adopted to assess the LSP performance of the XGBoost and SSXGBoost models. Results show that the SSXGBoost model has remarkably better LSP performance than the XGBoost model. Conclusively, the proposed SSXGBoost model effectively overcomes the problems that the accuracy of landslide samples needs to be further improved and that non‐landslide samples are difficult to select accurately. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Unveiling the hidden impact: Wildlife roadkill assessment in the Paraguayan Chaco.
- Author
-
Martínez, Nicolás, Goossen‐Lebrón, Thomas, Bauer, Frederick, Espínola, Viviana, Ortiz, María Belén, and Gonçalves, Larissa Oliveira
- Subjects
- *
WILDLIFE conservation , *ANIMAL populations , *SAMPLING errors , *ROADKILL , *ECOLOGICAL impact - Abstract
The incidence of wildlife roadkill significantly threatens the persistence of wildlife populations and disrupts the ecological functionality of ecosystems. This study investigates the impact of roadkills on wildlife in the Paraguayan Chaco, focusing on a 250‐km segment of Route 9 'Dr. Carlos Antonio López' between Villa Hayes and Pozo Colorado. We conducted a road survey for 15 months and recorded 2,338 carcasses, identifying 87 species, with mammals (41.3%), reptiles (32.3%) and birds (19.8%) being the most observed groups. The species most frequently killed included Cerdocyon thous, Caracara plancus, Thamnodynastes hypoconia and Procyon cancrivorus. We also recorded species of conservation concern. Additionally, we estimated mortality rates by accounting for sampling errors such as carcass removal and searcher efficiency, revealing annual roadkill rates of 5,183 mammals, 19,402 birds and 5,020 reptiles along the 250-km segment. Spatial analysis using Ripley's K statistic and HotSpot Identification highlighted significant variation in roadkill distribution across different taxonomic groups and seasons, with 51 km of road identified as hotspots when analysing all groups together. Notably, there was minimal overlap in hotspot locations between seasons and taxonomic groups, emphasizing the need for targeted mitigation strategies. Our findings challenge previous macroecological assessments suggesting low roadkill rates in Paraguay, underscoring the importance of local studies in accurately assessing ecological impacts. This study provides critical baseline data for conservation efforts and calls for further research to develop and implement effective roadkill mitigation strategies in Latin America, especially in the Chaco region. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
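Result 19 mentions correcting counts for carcass removal and searcher efficiency; a commonly used correction has the general shape below (our notation, not necessarily the authors' exact estimator).

```latex
% Observed carcass counts C are scaled up by the estimated probability that a
% carcass persists until a survey and the estimated probability that a searcher
% detects it, giving a corrected mortality estimate N-hat.
\hat{N} \;=\; \frac{C}{\hat{p}_{\mathrm{persist}} \times \hat{p}_{\mathrm{detect}}}
```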
20. An Application of a Small Area Procedure with Correlation Between Measurement Error and Sampling Error to the Conservation Effects Assessment Project.
- Author
-
Berg, Emily and Mosaferi, Sepideh
- Subjects
- *
MEASUREMENT errors , *SAMPLING errors , *SAMPLE size (Statistics) , *RUNOFF , *POCKETKNIVES - Abstract
County level estimates of mean sheet and rill erosion from the Conservation Effects Assessment Project (CEAP) are useful for program development and evaluation. Since county sample sizes in the CEAP survey are insufficient to support reliable direct estimators, small area estimation procedures are needed. The quantity of water runoff is a useful covariate but is unavailable for the full population. We use an estimate of mean runoff from the CEAP survey as a covariate in a small area model with sheet and rill erosion as the response. As the runoff and sheet and rill erosion are estimators from the same survey, the measurement error in the covariate is important as is the correlation between the measurement error and the sampling error. We conduct a detailed investigation of small area estimation in the presence of a correlation between the measurement error in the covariate and the sampling error in the response. In simulations, the proposed predictor is superior to small area predictors that assume the response and covariate are uncorrelated or that ignore the measurement error entirely. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
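For result 20, the model class being described can be sketched as a Fay–Herriot-type area model with a covariate that is itself a survey estimate; the notation below is ours and is only a schematic of the setup, not the authors' exact specification.

```latex
% y-hat_i: direct survey estimate of erosion in county i; x_i: true mean runoff;
% x-hat_i: survey estimate of runoff. Because both estimates come from the same
% CEAP sample, the sampling error e_i and the covariate measurement error delta_i
% are correlated, which is the feature the paper exploits.
\hat{y}_i = \theta_i + e_i, \qquad
\theta_i = \beta_0 + \beta_1 x_i + u_i, \qquad
\hat{x}_i = x_i + \delta_i, \qquad
\operatorname{Cov}(e_i, \delta_i) \neq 0
```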
21. Hybrid Fuzzy Method for Performance Evaluation of City Construction.
- Author
-
Yang, Chun-Ming, Hsu, Chang-Hsien, Chen, Tian, and Li, Shiyao
- Subjects
- *
CITIES & towns , *URBAN planning , *SAMPLING errors , *PARAMETER estimation , *SUSTAINABLE urban development - Abstract
Evaluating the performance of city construction not only helps optimize city functions and improve city quality, but it also contributes to the development of sustainable cities. However, most of the scoring rules for evaluating the performance of city construction are overly cumbersome and demand very high data integrity. Moreover, the properties, change scale, and scope of different evaluation indicators of city construction often lead to uncertain and ambiguous results. In this study, a hybrid fuzzy method is proposed to conduct the performance evaluation of city construction in two phases. Firstly, a city performance index (CPI) was developed by combining the means and standard deviations of indicators of city construction to address the volatility of historical statistical data as well as different types of data. Considering the sampling errors in data analysis, the parameter estimation method was used to derive the 100(1 − α)% confidence interval of the CPI. Buckley's fuzzy approach was then adopted to extend the statistical estimators from the CPI into fuzzy estimators, after which a fuzzy CPI was proposed. To identify the specific improvement directions for city construction, the fuzzy axiom design (fuzzy AD) method was applied to explore the relationship between the targets set by city managers and actual performance. Finally, an example of six cities in China is provided to illustrate the effectiveness and practicality of the proposed method. The results show that the performance of Chongqing on several evaluation indicators is lower than that of other cities. The proposed method takes into account the issues of uniformity and diversity in the performance evaluation of city construction. It can enable a quantitative assessment of the city construction level in all cities and provide theoretical support and a decision-making basis for relevant government departments to optimize city construction planning and scientifically formulate city construction policies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. The acute effects of motor cortex transcranial direct current stimulation on athletic performance in healthy adults: A systematic review and meta‐analysis.
- Author
-
Winker, Matteo, Hoffmann, Sven, Laborde, Sylvain, and Javelle, Florian
- Subjects
- *
TRANSCRANIAL direct current stimulation , *BRAIN stimulation , *MOTOR cortex , *ATHLETIC ability , *SAMPLING errors - Abstract
This systematic review and meta‐analysis independently assesses the acute effects of anodal and cathodal motor cortex transcranial direct current stimulation (tDCS) on athletic performance in healthy adults. In addition, it evaluates the unique and conjoint effects of potential moderators (i.e., stimulation parameters, exercise type, subjects' training status and risk of bias). An online database search was performed from inception until March 18th 2024 (PROSPERO: CRD42023355461). Forty‐three controlled trials were included in the systematic review, 40 in the anodal tDCS meta‐analysis (68 effects), and 9 (11 effects) in the cathodal tDCS meta‐analysis. Performance enhancement between pre‐ and post‐stimulation was the main outcome measure considered. The anodal tDCS effects on physical performance were small to moderate (g = .29, 95% CI [.18, .40], PI = −.64 to 1.23, I² = 64.0%). Exercise type, training status and use of commercial tDCS were significant moderators of the results. The cathodal tDCS effects were null (g = .04, 95% CI [−.05, .12], PI = −.14 to .23, I² = 0%), with small to moderate heterogeneity entirely due to sampling error, thus impairing further moderator analysis. These findings hold significant implications for the field of brain stimulation and physical performance, as they not only demonstrate a small to moderate effect of acute tDCS but also identify specific categories of individuals, devices and activities that are more susceptible to improvements. By addressing the multidimensional factors influencing the mechanisms of tDCS, we also provide suggestions for future research. Results indicate a small favouring effect of anodal tDCS on athletic performance with large heterogeneity. Exercise type, training status and use of commercial tDCS significantly moderate anodal tDCS effects. Cathodal tDCS shows null effects, with small to moderate heterogeneity entirely due to sampling error. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Investigation and quantification of composition variability in urinary stone analysis.
- Author
-
Binh Duy Le, Kyung-Jin Oh, Anh Tuan Le, Long Hoang, and Ilwoo Park
- Subjects
- *
FOURIER transform infrared spectroscopy , *URINARY calculi , *SAMPLING errors , *UNIVERSITY hospitals , *SAMPLE size (Statistics) - Abstract
Purpose: To investigate the variability in urinary stone composition analysis due to sampling and suggest potential solutions. Materials and Methods: We collected 1,135 stone fragments from 149 instances that had undergone a stone removal at Hanoi Medical University Hospital from January 2022 to August 2022. Each fragment was ground into fine powder and divided into separate specimens if the amount was abundant. For composition analyzing every specimen, Fourier transform infrared spectroscopy was performed. The composition of a given fragment was the average of its belonging specimens. The variability in composition was assessed on the fragment level (i.e., between fragments of an instance). We defined an instance as "significantly variable" if the maximum difference in any composition across its belonging fragments was equal to or greater than a given threshold. Results: On average, there were 7.6±3.3 stone fragments per instance and 2.3±0.5 specimens per fragment. We found that the variability could be substantial on the fragment level. Eighty-nine (69.5%) and 70 (54.7%) out of 128 multiple-component instances were significantly variable if the threshold was set at 20% and 30%, respectively. The variability of an instance on the fragment level was correlated with the size of fragment and the number of components. Conclusions: Our study demonstrated the significant variability in urinary stone composition and showed that it correlated with the size and the impurity of samples. Mapping denotation while sampling and analyzing as well as reporting the composition of individual fragments could be valuable to reduce potential variability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Event-triggered consensus control for a class of uncertain multiple Euler-Lagrange systems with actuator faults.
- Author
-
Li, Jian, Zhao, Wei, Liang, Yuqi, Wu, Zhaojing, and Zhao, Le
- Subjects
BACKSTEPPING control method ,EULER-Lagrange system ,CLOSED loop systems ,SAMPLING errors ,MULTIAGENT systems ,ADAPTIVE control systems - Abstract
This paper is devoted to the event-triggered consensus control for a class of uncertain multiple Euler-Lagrange (EL) systems with actuator faults. Different from the related works where strict conditions are imposed on system uncertainties and the measurements of the leader's output, more serious uncertainties are involved since all the system dynamic matrices are unknown while both actuator faults and external disturbance are considered; and moreover, fewer measurements of the leader's output are required since its time derivatives are not necessarily available for feedback. Mainly because of these, the consensus problem is hard to solve by straightforwardly extending the existing results. To solve the control problem, a dynamic gain with a smart choice of its updating law is introduced to overcome the serious uncertainties and the sampling error of the control signal. By incorporating the dynamic gain into the vectorial backstepping procedure, an adaptive consensus controller joined with an event-triggered mechanism is designed for each follower to ensure the consensus of the multi-agent system in the sense that all the states of the closed-loop system are bounded while the output of each follower tracks the leader's output. Finally, the effectiveness of the proposed method is verified by a simulation example. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. PREDICTION METHOD OF RATE OF PENETRATION BASED ON FUZZY SUPPORT VECTOR REGRESSION.
- Author
-
LI YANG, LISHEN WANG, LILI BAI, and WENFENG SUN
- Subjects
MACHINE learning ,GOODNESS-of-fit tests ,PREDICTION models ,SAMPLING errors ,FORECASTING - Abstract
Predicting the rate of penetration (ROP) is important for optimizing drilling parameters, improving drilling efficiency, and optimizing economic benefits throughout the drilling process. The current prediction model of ROP based on machine learning algorithms does not consider the interference of outliers. Therefore, in this study, we propose a method to predict ROP based on fuzzy support vector regression (FSVR). First, appropriate input parameters were selected from the controllable parameters. Second, based on the local outlier factor, a fuzzy membership degree was assigned to each sample. Finally, the sample with the fuzzy membership value was input into the model for ROP prediction. The results demonstrated that the goodness of fit (R2) of the improved FSVR model is 0.9634, and the mean absolute error is 0.1974. Compared with standard SVR and other models, the improved FSVR model has a stronger anti-interference ability, smaller prediction error for normal samples, and higher accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
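For result 25, the snippet below is one plausible reading of the pipeline the abstract describes (local-outlier-factor-based fuzzy membership feeding a weighted SVR); the membership mapping, data, and parameters are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Toy drilling-style data: controllable parameters X -> rate of penetration y,
# with a few gross outliers injected (all values are synthetic).
X = rng.uniform(0.0, 1.0, size=(200, 4))
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] - X[:, 2] + rng.normal(0.0, 0.1, size=200)
y[:5] += 5.0                                            # outlying ROP readings

# Fuzzy membership from the local outlier factor: the more outlying a sample,
# the smaller its weight in the regression.
lof = LocalOutlierFactor(n_neighbors=20).fit(np.column_stack([X, y]))
outlier_score = -lof.negative_outlier_factor_           # ~1 for inliers, >1 for outliers
membership = np.clip(1.0 / outlier_score, 0.0, 1.0)

# Weighted SVR: low-membership samples contribute less to the fit.
model = SVR(kernel="rbf", C=10.0).fit(X, y, sample_weight=membership)
print(f"R^2 on training data: {model.score(X, y):.3f}")
```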
26. Estimation of Standard Error, Linking Error, and Total Error for Robust and Nonrobust Linking Methods in the Two-Parameter Logistic Model.
- Author
-
Robitzsch, Alexander
- Subjects
ITEM response theory ,STATISTICAL models ,INFERENTIAL statistics ,MODEL theory ,SAMPLING errors - Abstract
The two-parameter logistic (2PL) item response theory model is a statistical model for analyzing multivariate binary data. In this article, two groups are brought onto a common metric under the 2PL model using linking methods. The linking methods of mean–mean linking, mean–geometric–mean linking, and Haebara linking are investigated in nonrobust and robust specifications in the presence of differential item functioning (DIF). M-estimation theory is applied to derive linking errors for the studied linking methods. However, estimated linking errors are prone to sampling error in estimated item parameters, thus resulting in artificially increased linking error estimates in finite samples. For this reason, a bias-corrected linking error estimate is proposed. The usefulness of the modified linking error estimate is demonstrated in a simulation study. It is shown that a simultaneous assessment of the standard error and linking error in a total error must be conducted to obtain valid statistical inference. In the computation of the total error, using the bias-corrected linking error estimate instead of the usually employed linking error provides more accurate coverage rates. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
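Result 26 refers to combining the standard error and the linking error into a total error; the usual combination (assumed standard practice here, not a formula quoted from the abstract) is shown below.

```latex
% Sampling (standard) error and linking error are treated as independent sources
% of uncertainty, so the total error used for inference is their quadrature sum.
\mathrm{TE} = \sqrt{\mathrm{SE}^{2} + \mathrm{LE}^{2}}
```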
27. Understanding the sources of error in MBAR through asymptotic analysis.
- Author
-
Li, Xiang Sherry, Van Koten, Brian, Dinner, Aaron R., and Thiede, Erik H.
- Subjects
- *
CENTRAL limit theorem , *MARKOV processes , *MOLECULAR dynamics , *ISOMERIZATION , *SAMPLING errors , *ERROR analysis in mathematics - Abstract
Many sampling strategies commonly used in molecular dynamics, such as umbrella sampling and alchemical free energy methods, involve sampling from multiple states. The Multistate Bennett Acceptance Ratio (MBAR) formalism is a widely used way of recombining the resulting data. However, the error of the MBAR estimator is not well-understood: previous error analyses of MBAR assumed independent samples. In this work, we derive a central limit theorem for MBAR estimates in the presence of correlated data, further justifying the use of MBAR in practical applications. Moreover, our central limit theorem yields an estimate of the error that can be decomposed into contributions from the individual Markov chains used to sample the states. This gives additional insight into how sampling in each state affects the overall error. We demonstrate our error estimator on an umbrella sampling calculation of the free energy of isomerization of the alanine dipeptide and an alchemical calculation of the hydration free energy of methane. Our numerical results demonstrate that the time required for the Markov chain to decorrelate in individual states can contribute considerably to the total MBAR error, highlighting the importance of accurately addressing the effect of sample correlation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
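For context on result 27, the MBAR estimator referred to in the abstract solves the self-consistent equations below (standard published form, with our notation; the paper's contribution is the error analysis, not this estimator itself).

```latex
% f-hat_i: reduced free energy of state i; u_k: reduced potential of state k;
% N_k: number of samples drawn from state k; x_n: the N = sum_k N_k pooled samples.
% Free energy differences are recovered as f-hat_j - f-hat_i.
\hat{f}_i = -\ln \sum_{n=1}^{N}
  \frac{\exp[-u_i(x_n)]}{\sum_{k=1}^{K} N_k \exp[\hat{f}_k - u_k(x_n)]},
\qquad i = 1, \dots, K
```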
28. Reducing the statistical error of generative adversarial networks using space‐filling sampling.
- Author
-
Wang, Sumin, Gao, Yuyou, Zhou, Yongdao, Pan, Bin, Xu, Xia, and Li, Tao
- Subjects
- *
GENERATIVE adversarial networks , *STATISTICAL errors , *SAMPLING errors - Abstract
This paper introduces a novel approach to reducing statistical errors in generative models, with a specific focus on generative adversarial networks (GANs). Inspired by the error analysis of GANs, we find that statistical errors mainly arise from random sampling, leading to significant uncertainties in GANs. To address this issue, we propose a selective sampling mechanism called space‐filling sampling. Our method aims to increase the sampling probability in areas with insufficient data, thereby improving the learning performance of the generator. Theoretical analysis confirms the effectiveness of our approach in reducing statistical errors and accelerating convergence in GANs. This research represents a pioneering effort in targeting the reduction of statistical errors in GANs, and it demonstrates the potential for enhancing the training of other generative models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Fuzzy judgement model for assessment of improvement effectiveness to performance of processing characteristics.
- Author
-
Chen, Kuen-Suan, Lai, Yuan-Lung, Huang, Ming-Chieh, and Chang, Tsang-Chuan
- Subjects
JUDGMENT (Psychology) ,PROCESS capability ,CONFIDENCE intervals ,SAMPLING errors ,MANUFACTURED products ,QUALITY function deployment ,PRODUCT quality - Abstract
Maintaining high levels of process quality is crucial to the competitiveness of manufacturing firms in today's increasingly global marketplace. To ensure the quality of manufactured products meets customer needs, process capability indices (PCIs) are widely used to analyze the process performance of various processing characteristics. Products characterised by processing characteristics with both unilateral and bilateral specifications are common in the current sales market. Manufacturing firms must often adopt multiple PCIs to analyze the process performance of a single product, which is inefficient in practical applications and management. The yield-based index Cpk is not subject to this limitation. For this reason, we employed Cpk to evaluate process performance and the effectiveness of improvement measures. In practice, Cpk is estimated from samples, which means that misjudgment may occur in the assessment of process performance and improvement effectiveness due to sampling errors. We therefore derived the 100(1 − α)% confidence interval of Cpk and, based on the producer's perspective, used the upper confidence limit to evaluate improvement effectiveness. To lower the risk of misjudgment and increase the reliability of improvement effectiveness in the case of data uncertainty, this paper further proposes fuzzy estimation using the right-sided confidence interval of Cpk and develops the fuzzy judgement model. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
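Result 29 is built around the yield-based index Cpk; its standard definition is shown below (standard textbook form; the paper's confidence-interval and fuzzy constructions sit on top of it).

```latex
% USL and LSL are the upper and lower specification limits. In practice mu and
% sigma are replaced by the sample mean and standard deviation, and that
% substitution is the source of the sampling error the abstract discusses.
C_{pk} = \min\!\left(\frac{\mathrm{USL} - \mu}{3\sigma},\; \frac{\mu - \mathrm{LSL}}{3\sigma}\right)
```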
30. Neural learning control for sampled‐data nonlinear systems based on Euler approximation and first‐order filter.
- Author
-
Liang, Dengxiang and Wang, Min
- Subjects
- *
RADIAL basis functions , *APPROXIMATION error , *NONLINEAR systems , *ADAPTIVE control systems , *COMPUTATIONAL complexity , *SAMPLING errors - Abstract
The primary focus of this research paper is to explore the realm of dynamic learning in sampled‐data strict‐feedback nonlinear systems (SFNSs) by leveraging the capabilities of radial basis function (RBF) neural networks (NNs) under the framework of adaptive control. First, the exact discrete‐time model of the continuous‐time system is expressed as an Euler strict‐feedback model with a sampling approximation error. We provide the consistency condition that establishes the relationship between the exact model and the Euler model with meticulous detail. Meanwhile, a novel lemma is derived to show the stability condition of a digital first‐order filter. To address the non‐causality issues of SFNSs with sampling approximation error and the input data dimension explosion of NNs, the auxiliary digital first‐order filter and backstepping technology are combined to propose an adaptive neural dynamic surface control (ANDSC) scheme. Such a scheme avoids the n‐step time delays associated with the existing NN updating laws derived by the common n‐step predictor technology. A rigorous recursion method is employed to provide a comprehensive verification of the stability, guaranteeing its overall performance and dependability. Following that, the NN weight error systems are systematically decomposed into a sequence of linear time‐varying subsystems, allowing for a more detailed analysis and understanding. In order to ensure the recurrent nature of the input variables, a recursive design is employed, thereby satisfying the partial persistent excitation condition specifically designed for the RBF NNs. Meanwhile, it is verified that the estimated NN weights converge to their ideal values. Compared with the common n‐step predictor technology, there is no need to redesign the learning rules because the designed NN weight updating laws have no time delays. Subsequently, after capturing and storing the convergence weights, a novel neural learning dynamic surface control (NLDSC) scheme is specifically formulated by leveraging the acquired knowledge. The introduced methodology reduces computational complexity and facilitates practical implementation. Finally, empirical evidence obtained from simulation experiments validates the efficacy and viability of the proposed methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Overcoming the High Error Rate of Composite DNA Letters‐Based Digital Storage through Soft‐Decision Decoding.
- Author
-
Xu, Yaping, Ding, Lulu, Wu, Shigang, and Ruan, Jue
- Subjects
- *
DECODING algorithms , *SAMPLING errors , *ERROR rates , *INFORMATION retrieval , *NUCLEOTIDES - Abstract
Composite DNA letters, by merging all four DNA nucleotides in specified ratios, offer a pathway to substantially increase the logical density of DNA digital storage (DDS) systems. However, these letters are susceptible to nucleotide errors and sampling bias, leading to a high letter error rate, which complicates precise data retrieval and augments reading expenses. To address this, Derrick‐cp is introduced as an innovative soft‐decision decoding algorithm tailored for DDS utilizing composite letters. Derrick‐cp capitalizes on the distinctive error sensitivities among letters to accurately predict and rectify letter errors, thus enhancing the error‐correcting performance of Reed‐Solomon codes beyond traditional hard‐decision decoding limits. Through comparative analyses in the existing dataset and simulated experiments, Derrick‐cp's superiority is validated, notably halving the sequencing depth requirement and slashing costs by up to 22% against conventional hard‐decision strategies. This advancement signals Derrick‐cp's significant role in elevating both the precision and cost‐efficiency of composite letter‐based DDS. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. CFD Research on Natural Gas Sampling in a Horizontal Pipeline.
- Author
-
Wu, Mingou, Chen, Yanling, Liu, Qisong, Xiao, Le, Fan, Rui, Li, Linfeng, Xiao, Xiaoming, Sun, Yongli, and Yan, Xiaoqin
- Subjects
- *
NATURAL gas pipelines , *NATURAL gas , *SAMPLING errors , *GRAVITY , *COMPUTER simulation - Abstract
Accurately determining if the sample parameters from a natural gas pipeline's sampling system reflect the fluid characteristics of the main pipe has been a significant industry concern for many years. In this paper, sampling of natural gas in a horizontal pipeline is investigated using CFD, with turbulence accounted for in the simulation. Firstly, the critical diameter for particles affected by gravity within such a pipeline is determined. Then, the effects of the operation pressure and velocity of the sampling branches on the sample parameters, and the influence of particle density on these sample parameters, are analyzed. Finally, four different structures of sampling branches for natural gas in a horizontal pipeline are compared. It is found that 100 μm is the critical diameter at which particles are affected by gravity; the operating pressure of the sampling branch has a significant impact on the particle mass concentration. The particle density has little impact on the sampling system. Overall, the design of the sampling branches does not cause significant sampling errors. This study provides guidance for optimal sampling in existing natural gas pipelines and enables effective monitoring of particle impurity content and properties in natural gas. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Silence is golden, but my measures still see—why cheaper-but-noisier outcome measures in large simple trials can be more cost-effective than gold standards.
- Author
-
Woolf, Benjamin, Pedder, Hugo, Rodriguez-Broadbent, Henry, and Edwards, Phil
- Subjects
- *
MEASUREMENT errors , *RANDOMIZED controlled trials , *SAMPLING errors , *JUDGMENT (Psychology) , *RESEARCH personnel , *SELECTION bias (Statistics) , *PERCENTILES - Abstract
Objective: To assess the cost-effectiveness of using cheaper-but-noisier outcome measures, such as a short questionnaire, for large simple clinical trials. Background: To detect associations reliably, trials must avoid bias and random error. To reduce random error, we can increase the size of the trial and increase the accuracy of the outcome measurement process. However, with fixed resources, there is a trade-off between the number of participants a trial can enrol and the amount of information that can be collected on each participant during data collection. Methods: To consider the effect on measurement error of using outcome scales with varying numbers of categories, we define and calculate the variance from categorisation that would be expected from using a category midpoint; define the analytic conditions under which such a measure is cost-effective; use meta-regression to estimate the impact of participant burden, defined as questionnaire length, on response rates; and develop an interactive web-app to allow researchers to explore the cost-effectiveness of using such a measure under plausible assumptions. Results: An outcome scale with only a few categories greatly reduced the variance of non-measurement. For example, a scale with five categories reduced the variance of non-measurement by 96% for a uniform distribution. We show that a simple measure will be more cost-effective than a gold-standard measure if the relative increase in variance due to using it is less than the relative increase in cost from the gold standard, assuming it does not introduce bias in the measurement. We found an inverse power law relationship between participant burden and response rates such that doubling the burden on participants reduces the response rate by around one third. Finally, we created an interactive web-app (https://benjiwoolf.shinyapps.io/cheapbutnoisymeasures/) to allow exploration of when using a cheap-but-noisy measure will be more cost-effective using realistic parameters. Conclusion: Cheaper-but-noisier questionnaires containing just a few questions can be a cost-effective way of maximising power. However, their use requires a judgement on the trade-off between the potential increase in risk of information bias and the reduction in the potential of selection bias due to the expected higher response rates. Key messages: A cheaper-but-noisier outcome measure, like a short-form questionnaire, is a more cost-effective method of maximising power in large simple clinical trials than an error-free gold-standard measure when the percentage increase in noise from using the cheaper-but-noisier measure is less than the relative difference in the cost of administering the two measures. We have created an R-shiny app (https://benjiwoolf.shinyapps.io/cheapbutnoisymeasures/) to facilitate the exploration of when this condition is met. Cheaper-but-noisier outcome measures are more likely to introduce information bias than a gold standard but may reduce selection bias because they reduce loss-to-follow-up. Researchers therefore need to form a judgement about the relative increase or decrease in bias before using a cheap-but-noisy measure. We encourage the development and validation of short-form questionnaires to enable the use of high-quality cheaper-but-noisier outcome measures in randomised controlled trials. [ABSTRACT FROM AUTHOR]
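As a concrete check of the categorisation argument above, the following Python sketch assumes a latent outcome that is uniform on [0, 1], records it only as the midpoint of one of k equal-width categories, and reproduces the quoted ~96% reduction in the variance of non-measurement for a five-category scale. The uniform latent distribution is the abstract's own example; everything else here is illustrative.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1_000_000)        # latent "true" outcome

def residual_variance(x, k):
    # Variance left when x is recorded only as the midpoint of one of k equal bins
    edges = np.linspace(0.0, 1.0, k + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, k - 1)
    midpoints = (edges[:-1] + edges[1:]) / 2.0
    return np.var(x - midpoints[idx])

var_unmeasured = np.var(x)                       # variance if the outcome is not measured at all
var_five_cats = residual_variance(x, 5)
print(1.0 - var_five_cats / var_unmeasured)      # ~0.96, i.e. a 96% reduction

The cost-effectiveness condition stated in the abstract can then be checked directly: the few-category scale is worthwhile whenever the relative increase in total variance it causes is smaller than the relative saving in cost per response.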
- Published
- 2024
- Full Text
- View/download PDF
34. Identification of exon regions in eukaryotes using fine-tuned variational mode decomposition based on kurtosis and short-time discrete Fourier transform.
- Author
-
Jayasree, K., Kumar Hota, Malaya, Dwivedi, Atul Kumar, Ranjan, Himanshuram, and Srivastava, Vinay Kumar
- Subjects
- *
ADAPTIVE filters , *SAMPLING errors , *NUCLEOTIDE sequence , *KURTOSIS , *DNA sequencing , *DISCRETE Fourier transforms - Abstract
In genomic research, identifying the exon regions in eukaryotes is the most cumbersome task. This article introduces a new, promising, model-independent method based on the short-time discrete Fourier transform (ST-DFT) and fine-tuned variational mode decomposition (FTVMD) for identifying exon regions. The proposed method uses the N/3 periodicity property of eukaryotic genes to detect the exon regions using the ST-DFT. However, background noise is present in the ST-DFT spectrum since the sliding rectangular window produces spectral leakage. To overcome this, FTVMD is proposed in this work. VMD is more resilient to noise and sampling errors than other decomposition techniques because it utilizes a generalization of the Wiener filter into several adaptive bands. The performance of VMD is affected by improper selection of the penalty factor (α) and the number of modes (K). Therefore, in fine-tuned VMD, the parameters of VMD (K and α) are optimized by the maximum kurtosis value. The main objective of this article is to enhance the accuracy of identifying exon regions in a DNA sequence. Finally, a comparative study demonstrates that the proposed technique is superior to its counterparts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
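The N/3 (period-3) periodicity exploited in entry 34 above can be illustrated with a short sliding-window DFT in Python. The binary-indicator mapping, window length, and step size below are common choices in the period-3 literature and are assumptions made here; the paper's FTVMD denoising stage is not reproduced.

import numpy as np

def period3_spectrum(seq, win=351, step=30):
    # Sliding-window power at the win/3 frequency bin (period-3 / exon signature).
    # seq: DNA string over {A, C, G, T}; win should be a multiple of 3.
    indicators = {b: np.array([1.0 if c == b else 0.0 for c in seq]) for b in "ACGT"}
    k = win // 3
    starts, power = [], []
    for s in range(0, len(seq) - win + 1, step):
        p = 0.0
        for u in indicators.values():
            x = u[s:s + win]                      # rectangular window (source of leakage)
            p += np.abs(np.fft.fft(x)[k]) ** 2
        starts.append(s)
        power.append(p)
    return np.array(starts), np.array(power)

# Toy usage on a random sequence; real use would apply this to an annotated gene
rng = np.random.default_rng(1)
seq = "".join(rng.choice(list("ACGT"), size=2000))
pos, p3 = period3_spectrum(seq)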
35. Noninvasive Tests to Assess Fibrosis and Disease Severity in Metabolic Dysfunction-Associated Steatotic Liver Disease.
- Author
-
Tincopa, Monica A. and Loomba, Rohit
- Subjects
- *
LIVER biopsy , *DISEASE progression , *SAMPLING errors , *CLINICAL medicine , *LIVER diseases - Abstract
Risk of disease progression and clinical outcomes in metabolic dysfunction-associated steatotic liver disease (MASLD) is associated with fibrosis stage and presence of "at-risk metabolic dysfunction-associated steatohepatitis (MASH)." Although liver biopsy is considered the gold standard to diagnose MASH and stage of fibrosis, biopsy is infrequently performed in clinical practice and has associated sampling error, lack of interrater reliability, and risk for procedural complications. Noninvasive tests (NITs) are routinely used in clinical practice for risk stratification of patients with MASLD. Several NITs are being developed for detecting "at-risk MASH" and cirrhosis. Clinical care guidelines apply NITs to identify patients needing subspecialty referral. With recently approved Food and Drug Administration treatment for MASH and additional emerging pharmacotherapy, NITs will identify patients who will most benefit from treatment, monitor treatment response, and assess risk for long-term clinical outcomes. In this review, we examine the performance of NITs to detect "at-risk MASH," fibrosis stage, response to treatment, and risk of clinical outcomes in MASLD and MASH. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. System error analysis and target localization of distributed pulse radars.
- Author
-
Su, Yihan, Wang, Lei, Wang, Zongyu, Liu, Yimin, and Wang, Xiqin
- Subjects
- *
RADAR signal processing , *OPTIMIZATION algorithms , *RADAR targets , *PARAMETER estimation , *SAMPLING errors - Abstract
Target localization for distributed radars requires precise knowledge of the calibration errors in time, frequency, and space. Previous studies focus on sensor location error and sensor directional error in space. This paper concentrates on the system errors in time and frequency, which directly affect the estimation of target range and velocity. Based on a linear frequency modulation waveform, the impact of the system errors, including sample interval, pulse repetition interval, and centre frequency, is analysed. An alternating optimization algorithm is proposed to achieve target localization, velocity estimation, and system error estimation simultaneously. Simulations show the effectiveness of the algorithm and the influence on localization of varying numbers of targets and radars. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Fuzzy Evaluation Model for Critical Components of Machine Tools.
- Author
-
Chen, Kuen-Suan, Yao, Kai-Chao, Cheng, Chien-Hsin, Yu, Chun-Min, and Chang, Chen-Hsu
- Subjects
- *
SIX Sigma , *SAMPLING errors , *MACHINE tools , *DATA analytics , *TECHNOLOGICAL innovations , *BIG data - Abstract
The rapid progression of emerging technologies like the Internet of Things (IoT) and Big Data analytics for manufacturing has driven innovation across various industries worldwide. Production data are utilized to construct a model for quality evaluation and analysis applicable to components processed by machine tools, ensuring process quality for critical components and final product quality for the machine tools. Machine tool parts often encompass several quality characteristics concurrently, categorized into three types: smaller-the-better, larger-the-better, and nominal-the-better. In this paper, an evaluation index for the nominal-the-better quality characteristic was segmented into two single-sided Six Sigma quality indexes. Furthermore, the process quality of the entire component product was assessed by n single-sided Six Sigma quality indexes. According to numerous studies, machine tool manufacturers conventionally base their decisions on small sample sizes (n), considering timeliness and costs. However, this often leads to inconsistent evaluation results due to significant sampling errors. Therefore, this paper established fuzzy testing rules using the confidence intervals of the q single-sided Six Sigma quality indices, serving as the fuzzy quality evaluation model for components of machine tools. [ABSTRACT FROM AUTHOR]
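As a rough illustration of splitting a nominal-the-better characteristic into two single-sided quality measures, the sketch below computes, for a small sample, the distance from the sample mean to each specification limit in units of the sample standard deviation. This generic form and the toy specification limits are assumptions made here for illustration; they are not necessarily the exact Six Sigma quality indexes or the fuzzy testing rules defined in the paper.

import numpy as np

def single_sided_indexes(x, lsl, usl):
    # Illustrative single-sided measures for a nominal-the-better characteristic
    xbar, s = np.mean(x), np.std(x, ddof=1)
    q_upper = (usl - xbar) / s        # guards against drifting above the upper limit
    q_lower = (xbar - lsl) / s        # guards against drifting below the lower limit
    return q_lower, q_upper

# Toy usage: a dimension with target 10.00 mm and specification limits 10.00 +/- 0.05 mm
rng = np.random.default_rng(2)
sample = rng.normal(10.005, 0.012, size=30)   # small sample, as in the paper's setting
print(single_sided_indexes(sample, 9.95, 10.05))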
- Published
- 2024
- Full Text
- View/download PDF
38. Quantifying the sampling error on burn counts in Monte-Carlo wildfire simulations using Poisson and Gamma distributions.
- Author
-
Waeselynck, Valentin, Johnson, Gary, Schmidt, David, Moritz, Max A., and Saah, David
- Subjects
- *
STATISTICAL power analysis , *GAMMA distributions , *POISSON distribution , *WILDFIRES , *SQUARE root , *SAMPLING errors - Abstract
This article provides a precise, quantitative description of the sampling error on burn counts in Monte-Carlo wildfire simulations - that is, the prediction variability introduced by the fact that the set of simulated fires is random and finite. We show that the marginal burn counts are (very nearly) Poisson-distributed in typical settings and infer through Bayesian updating that Gamma distributions are suitable summaries of the remaining uncertainty. In particular, the coefficient of variation of the burn count is equal to the inverse square root of its expected value, and this expected value is proportional to the number of simulated fires multiplied by the asymptotic burn probability. From these results, we derive practical guidelines for choosing the number of simulated fires and estimating the sampling error. Notably, the required number of simulated years is expressed as a power law. Such findings promise to relieve fire modelers of resource-consuming iterative experiments for sizing simulations and assessing their convergence: statistical theory provides better answers, faster. [ABSTRACT FROM AUTHOR]
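The two quantitative relations stated above (the CV of a burn count equals the inverse square root of its expected value, and the expected value scales with the number of simulated fires times the asymptotic burn probability) translate directly into a simulation-sizing rule. The sketch below assumes a proportionality constant of one; the paper's power-law expression for the required number of simulated years is not reproduced.

import numpy as np

def burn_count_cv(n_fires, burn_prob):
    # Poisson approximation: CV = 1 / sqrt(expected burn count)
    expected = n_fires * burn_prob            # assumes a proportionality constant of 1
    return 1.0 / np.sqrt(expected)

def fires_needed(target_cv, burn_prob):
    # Invert the relation to size a Monte-Carlo run for a desired relative precision
    return int(np.ceil(1.0 / (burn_prob * target_cv ** 2)))

print(fires_needed(0.10, 0.01))               # ~10,000 fires for a 10% CV where p = 0.01
print(burn_count_cv(10_000, 0.01))            # back-check: CV = 0.1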
- Published
- 2024
- Full Text
- View/download PDF
39. An Unsupervised Error Detection Methodology for Detecting Mislabels in Healthcare Analytics.
- Author
-
Zhou, Pei-Yuan, Lum, Faith, Wang, Tony Jiecao, Bhatti, Anubhav, Parmar, Surajsinh, Dan, Chen, and Wong, Andrew K. C.
- Subjects
- *
DATA analytics , *KNOWLEDGE base , *DATABASES , *CLUSTER sampling , *SAMPLING errors , *MEDICAL care - Abstract
Medical datasets may be imbalanced and contain errors due to subjective test results and clinical variability. The poor quality of the original data affects classification accuracy and reliability; hence, detecting abnormal samples in a dataset can help clinicians make better decisions. In this study, we propose an unsupervised error detection method using patterns discovered by the Pattern Discovery and Disentanglement (PDD) model, developed in our earlier work. Applied to a large dataset, the eICU Collaborative Research Database, for sepsis risk assessment, the proposed algorithm can effectively discover statistically significant association patterns, generate an interpretable knowledge base, cluster samples in an unsupervised manner, and detect abnormal samples in the dataset. As shown in the experimental results, our method outperformed K-Means by 38% on the full dataset and 47% on the reduced dataset for unsupervised clustering. Multiple supervised classifiers improve accuracy by an average of 4% after abnormal samples are removed by the proposed error detection approach. The proposed algorithm therefore provides a robust and practical solution for unsupervised clustering and error detection in healthcare data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Correction: Vecchioni et al. Multi-Locus Phylogenetic Analyses of the Almadablennius Clade Reveals Inconsistencies with the Present Taxonomy of Blenniid Fishes. Diversity 2022, 14, 53.
- Author
-
Vecchioni, Luca, Ching, Andrew C., Marrone, Federico, Arculeo, Marco, Hundt, Peter J., and Simons, Andrew M.
- Subjects
- *
SAMPLING errors , *CLASSIFICATION of fish , *BAYESIAN field theory , *RESEARCH personnel , *PHYLOGENY - Abstract
This document is a correction notice for a published article on the taxonomy of blenniid fishes. It addresses errors in the identification of samples and localities for GenBank sequences. The corrected sequences are provided, and changes to specimen identification and sampling locality are explained. The authors state that these corrections do not affect the scientific conclusions of the study. The document also includes a figure and a table that have been updated to reflect the corrections. The table lists various species of fish, their scientific names, their previous classification, and the locations where they were found. The authors of the document are Luca Vecchioni, Andrew C. Ching, Federico Marrone, Marco Arculeo, Peter J. Hundt, and Andrew M. Simons. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
41. Comparing Egocentric and Sociocentric Centrality Measures in Directed Networks.
- Author
-
An, Weihua
- Subjects
- *
MEASUREMENT errors , *SAMPLING errors , *EXPERIMENTAL design , *RECIPROCITY (Psychology) , *DENSITY - Abstract
Egocentric networks represent a popular research design for network research. However, to what extent and under what conditions egocentric network centrality can serve as reasonable substitutes for their sociocentric counterparts are important questions to study. The answers to these questions are uncertain simply because of the large variety of networks. Hence, this paper aims to provide exploratory answers to these questions by analyzing both empirical and simulated data. Through analyses of various empirical networks (including some classic albeit small ones), this paper shows that egocentric betweenness approximates sociocentric betweenness quite well (the correlation is high across almost all the networks being examined) while egocentric closeness approximates sociocentric closeness only reasonably well (the correlation is a bit lower on average with a larger variance across networks). Simulations also confirm this finding. Analyses further show that egocentric approximations of betweenness and closeness seem to work well in different types of networks (as featured by network size, density, centralization, reciprocity, transitivity, and geodistance). Lastly, the paper briefly presents three ideas to help improve egocentric approximations of centrality measures. [ABSTRACT FROM AUTHOR]
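A minimal networkx sketch of the comparison described above follows, run here on a random directed graph rather than the empirical networks used in the paper; the graph size and density are illustrative assumptions.

import networkx as nx
import numpy as np

# A random directed network stands in for an observed sociocentric network.
G = nx.gnp_random_graph(200, 0.03, seed=3, directed=True)

socio = nx.betweenness_centrality(G)                  # sociocentric betweenness

ego = {}
for v in G.nodes:
    # Ego network: v, its in- and out-neighbours, and the ties among them.
    H = nx.ego_graph(G, v, radius=1, undirected=True)
    ego[v] = nx.betweenness_centrality(H)[v]          # ego's betweenness within its ego network

nodes = list(G.nodes)
r = np.corrcoef([socio[v] for v in nodes], [ego[v] for v in nodes])[0, 1]
print(f"egocentric vs sociocentric betweenness correlation: {r:.2f}")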
- Published
- 2024
- Full Text
- View/download PDF
42. A Sample Size Formula for Network Scale-up Studies.
- Author
-
Josephs, Nathaniel, Feehan, Dennis M., and Crawford, Forrest W.
- Subjects
- *
STATISTICAL accuracy , *SAMPLING errors , *BUS drivers , *SAMPLE size (Statistics) , *SEX workers - Abstract
The network scale-up method (NSUM) is a survey-based method for estimating the number of individuals in a hidden or hard-to-reach subgroup of a general population. In NSUM surveys, sampled individuals report how many others they know in the subpopulation of interest (e.g. "How many sex workers do you know?") and how many others they know in subpopulations of the general population (e.g. "How many bus drivers do you know?"). NSUM is widely used to estimate the size of important sociological and epidemiological risk groups, including men who have sex with men, sex workers, HIV+ individuals, and drug users. Unlike several other methods for population size estimation, NSUM requires only a single random sample and the estimator has a conveniently simple form. Despite its popularity, there are no published guidelines for the minimum sample size calculation to achieve a desired statistical precision. Here, we provide a sample size formula that can be employed in any NSUM survey. We show analytically and by simulation that the sample size controls error at the nominal rate and is robust to some forms of network model mis-specification. We apply this methodology to study the minimum sample size and relative error properties of several published NSUM surveys. [ABSTRACT FROM AUTHOR]
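For context, the "conveniently simple form" of the basic scale-up estimator can be sketched as follows; the paper's actual sample-size formula is not reproduced here, and the toy responses and subpopulation sizes are made-up illustrations.

import numpy as np

def nsum_estimate(y_hidden, y_known, known_sizes, population_size):
    # Basic network scale-up estimator (a textbook form, not the paper's formula).
    # y_hidden:    (n,) reported contacts in the hidden group per respondent
    # y_known:     (n, K) reported contacts in each of K known subpopulations
    # known_sizes: (K,) true sizes of the K known subpopulations
    y_hidden = np.asarray(y_hidden, dtype=float)
    y_known = np.asarray(y_known, dtype=float)
    # Estimated personal network sizes ("degrees") from the known populations
    degrees = y_known.sum(axis=1) * population_size / np.sum(known_sizes)
    # Scale up the share of contacts in the hidden group to the whole population
    return population_size * y_hidden.sum() / degrees.sum()

# Toy usage with made-up survey responses
y_hidden = [0, 1, 0, 2, 0, 0, 1]
y_known = [[3, 1], [5, 2], [2, 0], [8, 3], [1, 1], [4, 2], [6, 1]]
print(nsum_estimate(y_hidden, y_known, known_sizes=[50_000, 20_000], population_size=1_000_000))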
- Published
- 2024
- Full Text
- View/download PDF
43. Does Confusion Really Hurt Novel Class Discovery?
- Author
-
Chi, Haoang, Yang, Wenjing, Liu, Feng, Lan, Long, Qin, Tao, and Han, Bo
- Subjects
- *
SAMPLING errors , *SUPERVISED learning , *DATA scrubbing , *SAMPLING (Process) - Abstract
When sampling data of specific classes (i.e., known classes) for a scientific task, collectors may encounter unknown classes (i.e., novel classes). Since these novel classes might be valuable for future research, collectors will also sample them and assign them to several clusters with the help of known-class data. This assigning process is known as novel class discovery (NCD). However, category confusion is common in the sampling process and may make NCD unreliable. To tackle this problem, this paper introduces a new and more realistic setting in which collectors may misidentify known classes and even confuse known classes with novel classes; we name it NCD under unreliable sampling (NUSA). We find that NUSA empirically degrades existing NCD methods if sampling errors are not accounted for. To handle NUSA, we propose an effective solution, named the hidden-prototype-based discovery network (HPDN): (1) we obtain relatively clean data representations even from the confusedly sampled data; (2) we propose a mini-batch K-means variant for robust clustering, alleviating the negative impact of residual errors embedded in the representations by detaching the noisy supervision in a timely manner. Experiments demonstrate that, under NUSA, HPDN significantly outperforms competitive baselines (e.g., 6% more than the best baseline on CIFAR-10) and remains robust when encountering serious sampling errors. [ABSTRACT FROM AUTHOR]
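The robust clustering step above is built on a mini-batch K-means variant; the generic scikit-learn call below shows only the baseline idea and is not the HPDN variant itself. The synthetic features stand in for learned representations and are assumptions made here.

import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(4)
# Synthetic stand-ins for learned representations of samples from three novel classes
features = np.vstack([rng.normal(loc=c, scale=0.5, size=(300, 16))
                      for c in (-2.0, 0.0, 2.0)])

km = MiniBatchKMeans(n_clusters=3, batch_size=64, n_init=10, random_state=0)
labels = km.fit_predict(features)           # cluster assignments used as discovered classes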
- Published
- 2024
- Full Text
- View/download PDF
44. A Preliminary Study on the Optimal Sampling Frequency for Water Chemistry under Different Rainfall Grades in a Typical Small Karst Watershed (典型喀斯特小流域不同降雨等级水化学最优采样频率初探).
- Author
-
方长敏, 彭 韬, 张志才, 徐少强, 莫小妹, 翟 奖, and 蒋卫威
- Subjects
RAINFALL ,WATER chemistry ,RAINFALL frequencies ,KARST ,SAMPLING errors ,RAINSTORMS - Abstract
Copyright of Journal of Soil & Water Conservation (1009-2242) is the property of Institute of Soil & Water Conservation and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
45. Patient Safety Climate in the Hospital Setting: Perception of Nursing Professionals.
- Author
-
Herrera, Claire Nierva and Guirardello, Edinêis de Brito
- Subjects
CORPORATE culture ,CROSS-sectional method ,BAR codes ,PATIENT safety ,MEDICAL quality control ,RESEARCH funding ,ACADEMIC medical centers ,DATA analysis ,QUESTIONNAIRES ,KRUSKAL-Wallis Test ,DESCRIPTIVE statistics ,MANN Whitney U Test ,NURSES' attitudes ,ANALYSIS of variance ,STATISTICS ,DATA analysis software ,SAMPLING errors - Abstract
Background: In global health crises, there is a heightened risk to patient and professional safety. Several studies have evaluated the safety climate, revealing different perceptions among healthcare professionals, often influenced by demographic characteristics. This study aimed to assess the percentage of problematic responses (PPR) for the patient safety climate dimensions and... [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Enhancing organizational sustainable innovation performance through organizational readiness for big data analytics.
- Author
-
Arshad, Muhammad, Qadir, Aneela, Ahmad, Waqar, and Rafique, Muhammad
- Subjects
PREPAREDNESS ,BIG data ,ORGANIZATIONAL performance ,STRUCTURAL equation modeling ,SAMPLING errors ,DEVELOPING countries - Abstract
Organizations must employ big data analytics (BDA) to maintain sustained innovation in a highly dynamic and evolving business landscape. Even though BDA has the transformative power to revolutionize how businesses operate and engage with their customers, adopting BDA has faced significant challenges, especially in developing countries. This research aims to create a theoretical framework for understanding how organizational readiness for BDA can influence sustainable innovation performance. Sampling errors were mitigated through a time-lagged study design, and the data were collected in three phases. Test results using Partial Least Squares Structural Equation Modeling show that organizational readiness is a critical mediator, establishing a robust link between BDA skills and sustainable innovation performance. The results imply the need for organizational foundations and alignment, which are critical to the effective strategic deployment of BDA for sustainable innovation performance. This study thus offers a valuable contribution to the topic and carries implications for organizations at the early, receptive stages of BDA adoption. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. A general matrix decomposition approach with application to stabilization of networked systems with stochastic sampling and two‐channel deception attacks.
- Author
-
Li, Yizhen, Hu, Zhipei, Deng, Feiqi, Su, Yongkang, and Li, Guangjie
- Subjects
- *
MATRIX decomposition , *MONTE Carlo method , *DECEPTION , *STOCHASTIC systems , *DISCRETE systems , *SAMPLING errors , *EXPONENTIAL stability - Abstract
This study is concerned with the stabilization analysis and controller design for networked systems with stochastic sampling and two-channel deception attacks. First, we give a general matrix decomposition approach that is applicable to scenarios where the system matrix A contains complex-valued eigenvalues. Then, a discrete stochastic framework is established for a class of networked systems that considers the joint effects of sampling errors and two-channel deception attacks. Utilizing the matrix decomposition approach introduced in this study, it becomes feasible to decouple the expectation operations for specific coupling matrices characterized by substantial nonlinearity and randomness. Based on this, a stabilization controller is constructed that ensures the exponential mean-square stability of the resulting discrete stochastic system. Finally, three simulation examples are provided to validate the effectiveness of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. Identification method of circumferential declination based on amplitude reduction of pipeline ultrasonic internal inspection signals.
- Author
-
Cai, Liangxue, Diao, Zhengqi, Chen, Fei, Guan, Liang, and Xu, Guangli
- Subjects
- *
ECHO , *ULTRASONICS , *SAMPLING errors , *ABSOLUTE value , *SIGNAL processing , *STATISTICAL sampling - Abstract
The echo signal of ultrasonic internal inspection is highly sensitive to the incident deflection angle between the probe and the measured surface of the pipeline, which is the key factor affecting the accuracy of ultrasonic inspection. In practical engineering, the echo signal is easily disturbed by the external environment, which makes the deflection angle difficult to identify. Defect echo signals under different circumferential declinations were collected with a pipeline ultrasonic internal inspection system, and the time-domain signal variation under different circumferential declinations was analysed. The results show that the presence of a circumferential declination reduces the absolute value of the overall amplitude of the ultrasonic echo signal. A signal processing method based on amplitude reduction is proposed to reduce the random sampling error. Comparative analysis shows that the circumferential declination can be identified with the amplitude reduction method and that the accuracy of ultrasonic recognition can be improved. By extracting the characteristic value of the waveform, an empirical formula is obtained; within 2°, the maximum deviation is 0.137°. The research results provide guidance for the identification of circumferential declination in practical engineering. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. A spatiotemporal comparison of interobserver error in vegetation sampling.
- Author
-
Morrison, Lloyd W., Leis, Sherry A., Short, Mary F., and DeBacker, Michael D.
- Subjects
- *
SAMPLING errors , *FORESTS & forestry , *NATIONAL parks & reserves , *ERROR rates , *TREES - Abstract
Questions: We asked how interobserver error in sampling vegetation (excluding trees) varied over time, space and habitat type; determined whether there were any obvious correlates of observer error; and evaluated evidence of bias among observers. Location: Nine national park units in the Midwestern USA. Methods: We quantified observer error in the context of a long‐term monitoring program employing three observers, evaluating interobserver error across 11 locations in the Midwestern USA over five years. The vegetation (excluding trees) was sampled independently by two teams of observers at prairie and woodland locations (n = 94 plots total). Results: Total pseudoturnover ranged between 20.2% and 22.1% at prairie locations, and between 16.8% and 28.6% at woodland locations. The overlooking component of pseudoturnover accounted for 75% or more of total pseudoturnover, with misidentification and cautious components each contributing 19% or less of the total, depending on location. The percentage of comparisons in which both observers recorded the same cover class ranged from 71.3% to 78.5% at the prairie locations and 56.9% to 85.6% at woodland locations. When observers did not agree on cover class, they were off by more than one class less than 6% of the time. Overlooking error was more likely to occur for species with less cover, while estimation error was more likely to occur for species with greater cover. A bias existed among observers, as the least experienced observer recorded 6.2%–11.8% more species than the other two observers. Interobserver bias also existed for rates of estimation error, as one observer consistently recorded higher cover classes. Conclusions: Observer error is a pervasive aspect of vegetation sampling. Continued training and experience yielded limited increases in precision. Elements of the sampling design probably reduced observer error to a certain degree, although some level of interobserver error in vegetation surveys is unavoidable. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Effect of streamflow measurement error on flood frequency estimation.
- Author
-
Velásquez, Nicolás and Krajewski, Witold F.
- Subjects
- *
MEASUREMENT errors , *STREAM measurements , *MONTE Carlo method , *LOGNORMAL distribution , *FLOODS , *SAMPLING errors , *WATERSHEDS - Abstract
Significant errors often arise when measuring streamflow during high flows and flood events. Such errors, compounded by short records of observations, may induce bias in flood frequency estimates, leading to costly engineering design mistakes. This work illustrates how observational (measurement) errors affect the uncertainty of flood frequency estimation. The study used the Bulletin 17C (US standard) method to estimate flood frequencies from historical peak flows modified to represent the measurement limitations. To perform the modifications, we explored, via Monte Carlo simulation, four hypothetical scenarios that mimic measurement errors, sample size limitations, and their combination. We used multiplicative noise from a log-normal distribution to simulate the measurement errors and implemented a bootstrap approach to represent the sampling error: we randomly selected M samples from the total N records of observed peak flows at four gauging stations in Iowa in the central USA. The observed data records range between 76 and 119 years for watersheds with drainage areas between 500 and 16,000 km². According to the results, measurement errors lead to more significant differences than sampling limitations. The scenarios exhibited differences with median magnitudes of up to 50%, with some cases reaching differences of up to 100% for return periods above 50 years. The results raise a red flag regarding flood frequency estimation and warrant further research on observational errors. [ABSTRACT FROM AUTHOR]
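A compact Python sketch of that simulation design follows: multiplicative log-normal measurement noise with unit mean, bootstrap subsampling of M peaks from the record, and a flood-quantile fit. The log-normal frequency fit, the noise level, and the synthetic record are simplifying assumptions made here; the study itself applies the Bulletin 17C (log-Pearson III) procedure to observed Iowa records.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

def quantile_under_errors(peaks, m, noise_cv=0.15, return_period=100, n_rep=2000):
    # Repeatedly perturb and subsample the record, then fit a log-normal and
    # return the 25th/50th/75th percentiles of the estimated T-year peak flow.
    sigma = np.sqrt(np.log(1.0 + noise_cv ** 2))        # log-sd giving the chosen CV
    z = norm.ppf(1.0 - 1.0 / return_period)
    estimates = []
    for _ in range(n_rep):
        sample = rng.choice(peaks, size=m, replace=True)                    # sampling limitation
        noise = rng.lognormal(mean=-0.5 * sigma ** 2, sigma=sigma, size=m)  # unit-mean noise
        noisy = sample * noise                                              # measurement error
        mu, s = np.mean(np.log(noisy)), np.std(np.log(noisy), ddof=1)
        estimates.append(np.exp(mu + s * z))             # e.g. the 100-year peak estimate
    return np.percentile(estimates, [25, 50, 75])

peaks = rng.gamma(shape=2.0, scale=300.0, size=90)       # synthetic 90-year peak-flow record
print(quantile_under_errors(peaks, m=40))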
- Published
- 2024
- Full Text
- View/download PDF