6,795 results for "sampling errors"
Search Results
2. Enhancing Data Utility in Personalized Differential Privacy: A Fine-Grained Processing Approach
- Author
-
Liu, Zhenhua, Wang, Wenxin, Liang, Han, Yuan, Yujing, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Chen, Xiaofeng, editor, and Huang, Xinyi, editor
- Published
- 2025
- Full Text
- View/download PDF
3. Performance evaluation of the introduction of full sample traceability system within the specimen collection process.
- Author
-
Foglia, Emanuela, Garagiola, Elisabetta, Ferrario, Lucrezia, and Plebani, Mario
- Subjects
- *
SUSTAINABLE development , *COST effectiveness , *BLOOD collection , *TECHNOLOGICAL innovations , *SAMPLING errors - Abstract
To evaluate the efficacy, safety and efficiency performances related to the introduction of innovative traceability platforms and integrated blood collection systems, for the improvement of a total testing process, thus also assessing the economic and organizational sustainability of these innovative technologies. A mixed-method approach was utilized. A key-performance indicators dashboard was created based on a narrative literature review and expert consensus and was assessed through a real-life data collection from the University Hospital of Padova, Italy, comparing three scenarios over time (2013, 2016, 2019) with varying levels of technological integration. The economic and organizational sustainability was determined considering all the activities performed from the tube check-in to the validation of the results, with the integration of the management of the prevalent errors that occurred during the process. The introduction of integrated venous blood collection and full sample traceability systems resulted in significant improvements in laboratory performance. Errors in samples collected in inappropriate tubes decreased by 42 %, mislabelled samples by 47 %, and samples with irregularities by 100 %. Economic analysis revealed a cost saving of 12.7 % per tube, equating to a total saving of 447,263.80 € over a 12-month period. Organizational efficiency improved with a reduction of 13,061.95 h in time spent on sample management, allowing for increased laboratory capacity and throughput. Results revealed the strategic relevance of introducing integrated venous blood collection and full sample traceability systems within the laboratory setting, with a real-life demonstration of TLA economic and organizational sustainability, generating an overall improvement of the process efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
4. Hybrid hyperinterpolation over general regions.
- Author
-
An, Congpei, Ran, Jiashu, and Sommariva, Alvise
- Subjects
- *
REGULARIZATION parameter , *CONTINUOUS functions , *ORTHONORMAL basis , *OPERATOR functions , *NOISE , *SAMPLING errors - Abstract
We present an ℓ₂² + ℓ₁-regularized discrete least squares approximation over general regions under assumptions of hyperinterpolation, named hybrid hyperinterpolation. Hybrid hyperinterpolation, using a soft thresholding operator and a filter function to shrink the Fourier coefficients approximated by a high-order quadrature rule of a given continuous function with respect to some orthonormal basis, is a combination of Lasso and filtered hyperinterpolations. Hybrid hyperinterpolation inherits features of both to deal with noisy data once the regularization parameter and the filter function are well chosen. We derive L₂ errors in a theoretical analysis of hybrid hyperinterpolation for approximating continuous functions with noisy data at the sampling points. Numerical examples illustrate the theoretical results and show that well-chosen regularization parameters can enhance the approximation quality over the unit sphere and the union of disks. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
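The abstract above describes shrinking quadrature-approximated Fourier coefficients with a filter function and a soft-thresholding operator. As a rough, hypothetical illustration of that idea (not the authors' implementation, and on an interval rather than the sphere or a union of disks), the Python sketch below applies a filter and a soft threshold to trigonometric coefficients estimated from noisy samples; the filter shape, threshold, and test function are arbitrary choices.

```python
import numpy as np

def soft_threshold(c, lam):
    """Lasso-style shrinkage: move each coefficient toward zero by lam."""
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

def filter_factor(k, kmax):
    """A simple low-pass filter: 1 for low orders, linear decay to 0 afterwards."""
    return np.clip(2.0 - 2.0 * k / kmax, 0.0, 1.0)

rng = np.random.default_rng(0)
n, kmax, lam = 400, 20, 0.02

# Equispaced nodes on [-pi, pi) act as the quadrature rule for the coefficients.
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
f = np.exp(np.sin(x))                      # underlying continuous function
y = f + rng.normal(0.0, 0.2, size=n)       # noisy samples

# Quadrature-approximated Fourier coefficients (orthonormal trig basis).
basis = [np.ones_like(x) / np.sqrt(2 * np.pi)]
for k in range(1, kmax + 1):
    basis += [np.cos(k * x) / np.sqrt(np.pi), np.sin(k * x) / np.sqrt(np.pi)]
basis = np.array(basis)                    # shape (2*kmax + 1, n)
coef = basis @ y * (2 * np.pi / n)         # plain hyperinterpolation coefficients

# Hybrid step: filter the coefficients, then soft-threshold them.
orders = np.repeat(np.arange(kmax + 1), 2)[1:]   # basis order of each coefficient
hybrid = soft_threshold(coef * filter_factor(orders, kmax), lam)

print("L2 error, plain :", np.sqrt(np.mean((coef @ basis - f) ** 2)))
print("L2 error, hybrid:", np.sqrt(np.mean((hybrid @ basis - f) ** 2)))
```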
5. Estimation of the mean exponential survival time under a sequential censoring scheme.
- Author
-
Hu, Jun, So, Hon Yiu, and Zhuang, Yan
- Subjects
- *
SURVIVAL rate , *DISTRIBUTION (Probability theory) , *MONTE Carlo method , *CENSORING (Statistics) , *HARD disks , *SAMPLING errors - Abstract
The exponential distribution can provide a simple and appealing survival-time model in reliability analysis and life tests. Due to certain experimental designs, time limitations, budgetary requirements, and other reasons, however, only censored data can be obtained. In this paper, we propose a novel estimation procedure for the mean survival time of an exponential distribution under a sequential censoring scheme, which can be treated as a combination of type I censoring and type II censoring. The procedure makes a trade-off between estimation error and sampling cost, using the minimum number of observations. An extensive set of Monte Carlo simulations is conducted to further validate its remarkable performance. To demonstrate the practical applicability, we then implement this newly proposed procedure to assess the reliability of various Backblaze hard disk models. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
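The entry above concerns estimating an exponential mean from censored data. The authors' sequential scheme and its stopping rule are not reproduced here; the sketch below only illustrates the standard building block it rests on, the maximum-likelihood estimate of the exponential mean under simple type I censoring (total time on test divided by the number of observed failures), checked with a small Monte Carlo loop on made-up parameter values.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean, n, censor_time = 10.0, 50, 8.0   # illustrative values only

def censored_exponential_mle(lifetimes, c):
    """MLE of the exponential mean under type I censoring at time c:
    total time on test divided by the number of observed failures."""
    observed = np.minimum(lifetimes, c)
    failures = np.sum(lifetimes <= c)
    return observed.sum() / max(failures, 1)

# Monte Carlo check of the estimator over repeated samples.
estimates = []
for _ in range(2000):
    t = rng.exponential(true_mean, size=n)
    estimates.append(censored_exponential_mle(t, censor_time))
print("true mean:", true_mean, " average estimate:", round(np.mean(estimates), 3))
```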
6. Two‐parametric prescan calibration of gradient‐induced sampling errors for rosette MRI.
- Author
-
Latta, Peter, Jiřík, Radovan, Vitouš, Jiří, Macíček, Ondřej, Vojtíšek, Lubomír, Rektor, Ivan, Standara, Michal, Křístek, Jan, and Starčuk, Zenon
- Subjects
SAMPLING errors ,INFORMATION measurement ,CALIBRATION ,MAGNETIC resonance imaging ,IMPERFECTION - Abstract
Purpose: The aim of this study was to develop a simple, robust, and easy-to-use calibration procedure for correcting misalignments in rosette MRI k-space sampling, with the objective of producing images with minimal artifacts. Methods: Quick automatic calibration scans were proposed for the beginning of the measurement to collect information on the time course of the rosette acquisition trajectory. A two-parameter model was devised to match the measured time-varying readout gradient delays and approximate the actual rosette sampling trajectory. The proposed calibration approach was implemented, and performance assessment was conducted on both phantoms and human subjects. Results: The fidelity of phantom and in vivo images exhibited significant improvement compared with uncorrected rosette data. The two-parameter calibration approach also demonstrated enhanced precision and reliability, as evidenced by quantitative T2* relaxometry analyses. Conclusion: Adequate correction of data sampling is a crucial step in rosette MRI. The presented experimental results underscore the robustness, ease of implementation, and suitability for routine experimental use of the proposed two-parameter rosette trajectory calibration approach. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
7. Uncertainty quantification and confidence intervals for naive rare-event estimators.
- Author
-
Bai, Yuanlu and Lam, Henry
- Subjects
ARTIFICIAL intelligence ,SAMPLING errors ,PROBABILITY theory ,ACQUISITION of data ,A priori - Abstract
We consider the estimation of rare-event probabilities using sample proportions output by naive Monte Carlo or collected data. Unlike using variance reduction techniques, this naive estimator does not have an a priori relative efficiency guarantee. On the other hand, due to the recent surge of sophisticated rare-event problems arising in safety evaluations of intelligent systems, efficiency-guaranteed variance reduction may face implementation challenges, which, coupled with the availability of computation or data collection power, motivates the use of such a naive estimator. In this paper we study the uncertainty quantification, namely the construction, coverage validity, and tightness of confidence intervals, for rare-event probabilities using only sample proportions. In addition to the known normality, Wilson, and exact intervals, we investigate and compare them with two new intervals derived from Chernoff's inequality and the Berry–Esseen theorem. Moreover, we generalize our results to the natural situation where sampling stops by reaching a target number of rare-event hits. Our findings show that the normality and Wilson intervals are not always valid, but they are close to the newly developed valid intervals in terms of half-width. In contrast, the exact interval is conservative, but safely guarantees the attainment of the nominal confidence level. Our new intervals, while being more conservative than the exact interval, provide useful insights into understanding the tightness of the considered intervals. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
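As a companion to the abstract above, the following sketch computes two of the classical intervals it compares, the Wilson score interval and the exact Clopper-Pearson interval, for a rare-event proportion estimated from naive Monte Carlo counts. The newly proposed Chernoff- and Berry-Esseen-based intervals from the paper are not reproduced here, and the sample counts are illustrative.

```python
import numpy as np
from scipy import stats

def wilson_interval(k, n, alpha=0.05):
    """Wilson score interval for a binomial proportion."""
    z = stats.norm.ppf(1 - alpha / 2)
    p_hat = k / n
    centre = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

def clopper_pearson_interval(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) interval; conservative but always valid."""
    lo = 0.0 if k == 0 else stats.beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else stats.beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

# Rare-event setting: 3 hits in one million naive Monte Carlo samples.
k, n = 3, 1_000_000
print("Wilson         :", wilson_interval(k, n))
print("Clopper-Pearson:", clopper_pearson_interval(k, n))
```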
8. Community participation disparities among people with disabilities during the COVID-19 pandemic.
- Author
-
Kersey, Jessica, Lane, Rachel, Kringle, Emily A., and Hammel, Joy
- Subjects
- *
HEALTH services accessibility , *RESOURCE allocation , *RESEARCH funding , *SOCIOECONOMIC factors , *DESCRIPTIVE statistics , *SURVEYS , *RACE , *HEALTH equity , *CONFIDENCE intervals , *COVID-19 pandemic , *PEOPLE with disabilities , *SOCIAL participation , *SAMPLING errors , *SOCIAL isolation , *ACCESS to information , *EDUCATIONAL attainment - Abstract
Purpose: To describe disparities in community participation during the COVID-19 pandemic among people with disabilities. Methods: Respondents to Phase 3.3 of the COVID Household Pulse Survey (US Census Bureau) were classified by disability status. Risk ratios and risk differences were computed to compare the risk of poor outcomes on economic participation, community service use, and community activities by disability status - both overall (compared to the nondisabled reference) and by race/ethnicity (each subgroup compared to the White nondisabled reference). Results: At least one type of disability was reported by 59.6% of respondents. People with disabilities were more likely to report in-person medical appointments but were at greater risk of poor outcomes across all other outcomes [risk ratio range = 1.01(1.01–1.02) to 1.91(1.80–2.01), risk difference range = 1.0(0.5–1.5) to 13.4(12.6–14.2)]. The disabled Black and disabled Hispanic/Latino groups experienced disproportionately high risk of poor outcomes across all indicators [risk ratio range = 1.0 (1.0–1.1) to 6.1 (5.0–7.1), risk difference range = 3.2 (1.9–4.4) to 33.1 (30.1–35.4)]. Conclusions: The high number of people reporting disability, along with the notable disparities in community participation outcomes among those reporting disability, suggest the need for expanded rehabilitation services and community supports to enhance participation. IMPLICATIONS FOR REHABILITATION: People with disabilities experienced disparities in community participation outcomes during the pandemic, particularly in indicators of economic participation (paid employment, income, and education). Disabled people from racial and ethnic minority groups experienced the most severe disparities in outcomes. Stronger rehabilitation services are critical to address new disability or pandemic-related changes in the experience or severity of existing disability. Stronger community and social supports (employment supports, accessible assistive technology, and safe transportation options) may also reduce the disparities in community participation experienced by people with disabilities. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
9. Understanding cellular proliferation activity in breast cancer using multi-compartment model of transverse relaxation time mapping on 3T MRI.
- Author
-
Nkonde, Kangwa Alex, Cheung, Sai Man, Senn, Nicholas, and He, Jiabao
- Subjects
EXTRACELLULAR space ,NEOADJUVANT chemotherapy ,SAMPLING errors ,DUCTAL carcinoma ,CELL proliferation - Abstract
Introduction: Precise understanding of proliferative activity in breast cancer holds significant value in the monitoring of neoadjuvant treatment, while current immunostaining of Ki-67 from biopsy or resected tumour suffers from partial sampling error. A multi-compartment model of transverse relaxation time has been proposed to differentiate the intra- and extra-cellular space and biochemical environment, but it is susceptible to noise; a recently developed Bayesian algorithm has been suggested to improve robustness. We hence hypothesise that intra- and extra-cellular transverse relaxation times using the Bayesian algorithm might be sensitive to proliferative activity. Materials and methods: Twenty whole tumour specimens freshly excised from patients with invasive ductal carcinoma were scanned on a 3 T clinical scanner. The overall transverse relaxation time was computed using a single-compartment model with the non-linear least squares algorithm, while intra- and extra-cellular transverse relaxation times were computed using a multi-compartment model with the Bayesian algorithm. Immunostaining of Ki-67 was conducted, yielding 9 and 11 cases with high and low proliferating activities, respectively. Results: For the single-compartment model, there was a significantly higher overall transverse relaxation time (p = 0.031) in high (83.55 ± 7.38 ms) against low (73.30 ± 11.30 ms) proliferating tumours. For the multi-compartment model, there was a significantly higher intra-cellular transverse relaxation time (p = 0.047) in high (73.52 ± 10.92 ms) against low (61.30 ± 14.01 ms) proliferating tumours. There was no significant difference in extra-cellular transverse relaxation time (p = 0.203) between high and low proliferating tumours. Conclusions: Overall and Bayesian intra-cellular transverse relaxation times are associated with proliferative activities in breast tumours, potentially serving as a non-invasive imaging marker for neoadjuvant treatment monitoring. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
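The multi-compartment model in the entry above separates intra- and extra-cellular transverse relaxation. Below is a minimal sketch of the underlying bi-exponential signal model, fitted here with ordinary nonlinear least squares rather than the Bayesian algorithm the study relies on; echo times, amplitudes, and T2 values are invented illustrative numbers.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_compartment(te, a_intra, t2_intra, a_extra, t2_extra):
    """Bi-exponential decay: intra- plus extra-cellular water pools."""
    return a_intra * np.exp(-te / t2_intra) + a_extra * np.exp(-te / t2_extra)

rng = np.random.default_rng(2)
te = np.arange(10, 320, 10, dtype=float)        # echo times in ms (illustrative)
truth = (0.6, 70.0, 0.4, 180.0)                 # assumed amplitudes and T2 values
signal = two_compartment(te, *truth) + rng.normal(0, 0.005, te.size)

# Plain nonlinear least squares; the cited work instead uses a Bayesian fit,
# which is more robust when the two T2 values are close or the SNR is low.
p0 = (0.5, 50.0, 0.5, 200.0)
popt, _ = curve_fit(two_compartment, te, signal, p0=p0,
                    bounds=([0, 1, 0, 1], [2, 500, 2, 1000]))
print("fitted (A_i, T2_i, A_e, T2_e):", np.round(popt, 1))
```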
10. Prevalence and detection of citrate contamination in clinical laboratory.
- Author
-
Lorde, Nathan, Gama, Rousseau, and Kalaria, Tejas
- Subjects
- *
ION selective electrodes , *ELECTROLYTE analysis , *HEMODIALYSIS patients , *PATHOLOGICAL laboratories , *SAMPLING errors - Abstract
To study the prevalence of trisodium citrate (Na3Citrate) contamination in hypernatraemic serum samples by direct measurement of citrate and to evaluate the performance of indirect markers for identification of Na3Citrate contamination. Serum citrate was measured in all hypernatraemic serum samples (sodium ≥148 mmol/L) over a three-month period. The performance of serum chloride, sodium-chloride gap, indirect ion selective electrode (ISE)-direct ISE sodium disparity and osmolar gap in identification of Na3Citrate contaminated samples was assessed against the 'gold-standard' direct citrate measurement. In total, 27 Na3Citrate contaminated samples were identified based on serum citrate concentration ≥1.5 mmol/L. The prevalence of citrate contamination was 3.1 % of hypernatraemic samples (n=875) and 0.017 % of all samples received for urea and electrolyte analysis (n=153,404). Most contaminated samples were from patients receiving haemodialysis (59.3 %), and the rest from inpatients. Cut-offs to give 100 % sensitivity were chloride ≤105 mmol/L (specificity 93.4 %), sodium-chloride gap ≥47 mmol/L (specificity 95.3 %), indirect ISE-direct ISE sodium disparity ≥3 mmol/L (specificity 81.9 %), and osmolar gap ≥39 mOsm/kg (specificity 2.8 %). Trisodium citrate contamination is uncommon. Most contaminated samples were from patients receiving haemodialysis, likely because of contamination with citrate catheter locking solution. Screening with serum chloride or sodium-chloride gap can confidently exclude Na3Citrate contamination in over 90 % of hypernatraemic samples, and in nearly all samples with sodium ≥155 mmol/L if metabolic alkalosis has been excluded. In the remaining samples, Na3Citrate contamination can only be definitively confirmed or excluded by measurement of serum citrate. We propose algorithms to identify spurious hypernatraemia. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
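For illustration only, the function below turns the screening cut-offs reported in the abstract above (chloride, sodium-chloride gap, and indirect-direct ISE sodium disparity) into a simple rule-out check for trisodium citrate contamination in a hypernatraemic sample. It is a sketch of the reported thresholds, not the authors' validated algorithm, and is no substitute for direct citrate measurement.

```python
def flag_possible_citrate_contamination(sodium, chloride, indirect_minus_direct_na=None):
    """Screen a hypernatraemic sample (sodium >= 148 mmol/L) against the cut-offs
    reported in the study above; a sample outside every cut-off is effectively
    ruled out, otherwise serum citrate should be measured directly."""
    if sodium < 148:
        return "not hypernatraemic - screen not applicable"
    reasons = []
    if chloride <= 105:                       # ~93 % specificity in the study
        reasons.append("low chloride")
    if sodium - chloride >= 47:               # ~95 % specificity in the study
        reasons.append("wide sodium-chloride gap")
    if indirect_minus_direct_na is not None and indirect_minus_direct_na >= 3:
        reasons.append("indirect-direct ISE sodium disparity")
    return reasons or "contamination effectively excluded"

# Illustrative values only (mmol/L).
print(flag_possible_citrate_contamination(sodium=160, chloride=100))
print(flag_possible_citrate_contamination(sodium=152, chloride=112))
```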
11. Negative Pituitary MRI Findings in Cushing's Disease Do Not Lead to Inferior Rates of Long-Term Remission Following Transsphenoidal Surgery—A Single-Center Experience.
- Author
-
Burns, William, Kholi, Gurkirat, Mungara, Tharan, Contento, Nicolas, Romiyo, Prasanth, Singh, Rohin, Shafiq, Ismat, and Vates, Edward
- Subjects
- *
CUSHING'S syndrome , *CAVERNOUS sinus , *REOPERATION , *SAMPLING errors , *DISEASE remission - Abstract
The article discusses the outcomes of transsphenoidal surgery (TSS) in patients with Cushing's disease (CD) who have negative MRI findings compared to those with positive MRI findings. The study found that while short-term remission rates were lower in patients with negative MRI findings, long-term remission rates and persistent disease rates were similar between the two groups. Factors predicting long-term remission included pre- and postoperative cortisol levels, extent of resection, and cavernous sinus invasion, rather than histological confirmation or positive MRI findings. The study suggests that surgery can induce long-term remission even in the absence of histological confirmation or positive MRI findings, possibly due to postoperative ischemic necrosis of the residual adenoma. [Extracted from the article]
- Published
- 2025
- Full Text
- View/download PDF
12. Double M-Plasty Skin Repair (Combined Linear and Advancement Flaps) for Excision of Pigmented Skin Lesion (PSL) Suspicious of Malignant Melanoma (MM)—A Minimally Invasive Surgery MIS Approach with Maximal Tissue Conservation for a Potentially Benign Lesion
- Author
-
Chen, Patrick
- Subjects
- *
MINIMALLY invasive procedures , *EUCLIDEAN geometry , *SURGICAL margin , *PLANE geometry , *SAMPLING errors , *WOUND healing - Abstract
The article discusses the use of a Double M-Plasty skin repair technique for excision of pigmented skin lesions suspected of being malignant melanoma. This minimally invasive surgery approach aims to conserve tissue while ensuring accurate histopathological diagnosis. The technique offers advantages such as reducing tension, minimizing scarring, and improving cosmetic outcomes, making it a viable option for lesions on critical facial organs. The Double M-Plasty technique is particularly beneficial for potentially benign lesions, as it allows for precise lesion site focus and tissue conservation. [Extracted from the article]
- Published
- 2025
- Full Text
- View/download PDF
13. Linear and Nonlinear Indices of Score Accuracy and Item Effectiveness for Measures That Contain Locally Dependent Items.
- Author
-
Ferrando, Pere J., Navarro-González, David, and Morales-Vives, Fabia
- Subjects
- *
CHAOS theory , *STATISTICAL models , *RESEARCH methodology evaluation , *FACTOR analysis , *PREDICTIVE validity , *RELIABILITY (Personality trait) , *SAMPLING errors - Abstract
The problem of local item dependencies (LIDs) is very common in personality and attitude measures, particularly in those that measure narrow-bandwidth dimensions. At the structural level, these dependencies can be modeled by using extended factor analytic (FA) solutions that include correlated residuals. However, the effects that LIDs have on the scores based on these extended solutions have received little attention so far. Here, we propose an approach to simple sum scores, designed to assess the impact of LIDs on the accuracy and effectiveness of the scores derived from extended FA solutions with correlated residuals. The proposal is structured at three levels—(a) total score, (b) bivariate-doublet, and (c) item-by-item deletion—and considers two types of FA models: the standard linear model and the nonlinear model for ordered-categorical item responses. The current proposal is implemented in SINRELEF.LD, an R package available through CRAN. The usefulness of the proposal for item analysis is illustrated with the data of 928 participants who completed the Family Involvement Questionnaire-High School Version (FIQ-HS). The results show not only the distortion that the doublets cause in the omega reliability estimate when local independency is assumed but also the loss of information/efficiency due to the local dependencies. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
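The abstract above notes that ignoring local item dependencies distorts the omega reliability estimate. A minimal sketch of that effect is shown below, using the standard omega-total formula for a one-factor linear FA solution rather than the SINRELEF.LD implementation; the loadings and the residual correlation for the doublet are invented values.

```python
import numpy as np

def omega_total(loadings, residual_cov):
    """Omega for a unit-weighted sum score from a one-factor linear FA solution:
    (sum of loadings)^2 over that quantity plus the summed residual covariance."""
    s = loadings.sum()
    return s**2 / (s**2 + residual_cov.sum())

loadings = np.array([0.7, 0.7, 0.6, 0.6, 0.5, 0.5])   # illustrative loadings
theta = np.diag(1 - loadings**2)                      # residual variances

# Add a local dependency: a positive residual correlation between items 1 and 2.
theta_lid = theta.copy()
theta_lid[0, 1] = theta_lid[1, 0] = 0.25

print("omega assuming local independence:", round(omega_total(loadings, theta), 3))
print("omega with the doublet modelled  :", round(omega_total(loadings, theta_lid), 3))
```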
14. A Multi-Scale Feature Focus and Dynamic Sampling-Based Model for Hemerocallis fulva Leaf Disease Detection.
- Author
-
Wang, Tao, Xia, Hongyi, Xie, Jiao, Li, Jianjun, and Liu, Junwan
- Subjects
URBAN ecology ,LANDSCAPE design ,FEATURE extraction ,DAYLILIES ,SAMPLING (Process) ,SAMPLING errors - Abstract
Hemerocallis fulva, essential to urban ecosystems and landscape design, faces challenges in disease detection due to limited data and reduced accuracy in complex backgrounds. To address these issues, the Hemerocallis fulva leaf disease dataset (HFLD-Dataset) is introduced, alongside the Hemerocallis fulva Multi-Scale and Enhanced Network (HF-MSENet), an efficient model designed to improve multi-scale disease detection accuracy and reduce misdetections. The Channel–Spatial Multi-Scale Module (CSMSM) enhances the localization and capture of critical features, overcoming limitations in multi-scale feature extraction caused by inadequate attention to disease characteristics. The C3_EMSCP module improves multi-scale feature fusion by combining multi-scale convolutional kernels and group convolution, increasing fusion adaptability and interaction across scales. To address interpolation errors and boundary blurring in upsampling, the DySample module adapts sampling positions using a dynamic offset learning mechanism. This, combined with pixel reordering and grid sampling techniques, reduces interpolation errors and preserves edge details. Experimental results show that HF-MSENet achieves mAP@50 and mAP@50–95 scores of 94.9% and 80.3%, respectively, outperforming the baseline model by 1.8% and 6.5%. Compared to other models, HF-MSENet demonstrates significant advantages in efficiency and robustness, offering reliable support for precise disease detection in Hemerocallis fulva. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
15. Carleman linearization of nonlinear systems and its finite-section approximations.
- Author
-
Amini, Arash, Zheng, Cong, Sun, Qiyu, and Motee, Nader
- Subjects
LINEAR dynamical systems ,LINEAR systems ,NONLINEAR systems ,TIME perspective ,SAMPLING errors ,PREDICTION models ,NONLINEAR dynamical systems - Abstract
The Carleman linearization is one of the mainstream approaches to lift a finite-dimensional nonlinear dynamical system into an infinite-dimensional linear system with the promise of providing accurate approximation of the original nonlinear system over larger regions around the equilibrium for longer time horizons with respect to the conventional first-order linearization approach. Finite-section approximations of the lifted system have been widely used to study dynamical and control properties of the original nonlinear system. In this context, some of the outstanding problems are to determine under what conditions, as the finite-section order (i.e., truncation length) increases, the trajectory of the resulting approximate linear system from the finite-section scheme converges to that of the original nonlinear system and whether the time interval over which the convergence happens can be quantified explicitly. In this paper, we provide explicit error bounds for the finite-section approximation and prove that the convergence is indeed exponential with respect to the finite-section order. For a class of stable nonlinear dynamical systems, it is shown that one can achieve exponential convergence over the entire time horizon up to infinity. Our results are practically plausible as our proposed error bound estimates can be used to compute proper truncation lengths for a given application, e.g., determining a proper sampling period for model predictive control and reachability analysis for safety verifications. We validate our theoretical findings through several illustrative simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
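The Carleman construction in the entry above can be illustrated on a scalar quadratic ODE, where the finite-section matrix has a simple closed form. The sketch below is a toy example, not the paper's general setting or its error bounds: it lifts dx/dt = a·x + b·x² to the monomial state (x, x², ..., x^N), truncates at order N, and compares the lifted linear trajectory with the true nonlinear solution for increasing truncation orders.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

# Scalar nonlinear system dx/dt = a*x + b*x**2 (stable for small x when a < 0).
a, b, x0, T = -1.0, 0.5, 0.4, 3.0

def carleman_matrix(order):
    """Finite-section Carleman matrix for the state (x, x^2, ..., x^order):
    d(x^k)/dt = k*a*x^k + k*b*x^(k+1); the last row drops the x^(order+1) term."""
    A = np.zeros((order, order))
    for k in range(1, order + 1):
        A[k - 1, k - 1] = k * a
        if k < order:
            A[k - 1, k] = k * b
    return A

# Reference trajectory of the original nonlinear system.
ts = np.linspace(0.0, T, 7)
ref = solve_ivp(lambda t, x: a * x + b * x**2, (0, T), [x0], t_eval=ts).y[0]

for order in (1, 3, 6):
    A = carleman_matrix(order)
    z0 = np.array([x0**k for k in range(1, order + 1)])
    approx = [expm(A * t).dot(z0)[0] for t in ts]   # first component approximates x(t)
    err = np.max(np.abs(np.array(approx) - ref))
    print(f"finite-section order {order}: max error over [0, {T}] = {err:.2e}")
```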
16. Review and characterization of the 2014 Orkney damage datasets for damage pattern mapping and fragility curve construction.
- Author
-
Nqasha, Thando, Akombelwa, Mulemwa, Singh, Mayshree, and Kijko, Andrzej
- Subjects
EARTHQUAKE damage ,SAMPLING errors ,TEXTURE mapping ,EARTHQUAKES ,REGRESSION analysis - Abstract
The 2014 Orkney earthquake caused significant damage to unreinforced masonry buildings in the surrounding townships. After the earthquake, field surveys were conducted to assess the extent of damage in the affected areas. This study reviews data collected from the 2014 Orkney earthquake to investigate damage patterns, evaluate building safety for occupancy, and support fragility curve construction. Damage was quantified based on the European Macroseismic Scale (EMS-98) to assess building safety and conduct regression analysis. The results indicate that the collected data is suitable for investigating damage patterns and determining building safety for occupancy. However, it is not suitable for constructing fragility curves. Empirical fragility curves are typically developed using logistic regression, but this study found the data unsuitable for regression analysis due to sampling errors and limited data quantity. This study recommends the use of first-order approximation methods to supplement the dataset, reducing sampling errors and increasing data quantity. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
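The study above concluded that its dataset was not suitable for regression-based fragility curves; purely to illustrate the construction it refers to, the sketch below fits an empirical fragility curve, P(damage grade exceeded | ground-motion intensity), by logistic regression on synthetic data. The intensity measure, damage threshold, and all numerical values are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Synthetic survey data: ground-motion intensity (PGA in g) per building and
# whether a chosen EMS-98 damage grade was reached or exceeded.
pga = rng.uniform(0.05, 0.6, size=300)
true_prob = 1 / (1 + np.exp(-np.log(pga / 0.25) / 0.4))   # assumed fragility in log intensity
exceeded = rng.random(300) < true_prob

# Empirical fragility curve: logistic regression on log intensity.
model = LogisticRegression().fit(np.log(pga).reshape(-1, 1), exceeded)
for level in (0.1, 0.2, 0.3, 0.4):
    p = model.predict_proba(np.log([[level]]))[0, 1]
    print(f"P(exceed damage grade | PGA = {level:.1f} g) = {p:.2f}")
```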
17. Inference for a dependent competing risks model on Marshall-Olkin bivariate Lomax-geometric distribution.
- Author
-
Ye, Tianrui, Zhang, Chunmei, and Gui, Wenhao
- Subjects
- *
STATISTICAL reliability , *EXPECTATION-maximization algorithms , *MAXIMUM likelihood statistics , *BAYESIAN analysis , *INFERENTIAL statistics , *SAMPLING errors - Abstract
The study of competing risk models is of great significance to survival and reliability analysis in statistics and it is more reasonable that they are assumed to have dependent failure causes in the actual situation. Therefore, statistical inference of five-parameter Marshall–Olkin bivariate Lomax-geometric distribution is considered in combination with dependent failure causes in this article. From the perspective of classical frequency, the estimates of unknown parameters are derived by the EM algorithm and the existence and uniqueness of solutions are also proved. In the Bayesian framework, a rather flexible class of prior distributions and the importance sampling technique are considered to obtain the estimates of all parameters for two types of data under the squared error loss function, and the credible intervals are also constructed. Finally, some simulation results and real data analysis are provided to show the effectiveness of the constructed model and the performance of various methods. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
18. The Mediating Effect of Time Management Skills on the Relationship Between Homework Load and Mathematics Achievement among BSEd Mathematics College Students.
- Author
-
Sequiña, Lea Joy S. and Generalao, Regine L.
- Subjects
MATHEMATICS students ,TIME management ,MATHEMATICS education ,SAMPLING errors ,EDUCATION students - Abstract
The purpose of the study was to determine the mediating effect of time management skills on the relationship between homework load and mathematics achievement among mathematics college students. The study is quantitative, non-experimental research that utilizes descriptive-correlational and mediation analyses. Using stratified random sampling, specifically proportional allocation, for the sampling technique and Slovin's formula with a 0.05 margin of error for the sample size, a sample of 150 randomly selected mathematics education students answered the surveys on the three variables. Results showed that the levels of homework load and mathematics achievement were both high. Moreover, the level of time management skills was also high. Results also revealed that the relationships between homework load and mathematics achievement, homework load and time management skills, and time management skills and mathematics achievement among first- to fourth-year mathematics students were significant. Moreover, the results showed that time management skills fully mediated the relationship between homework load and mathematics achievement. This means that giving students more homework in math can initially help improve their basic understanding. However, this benefit may fade if the homework time is not well-managed. Teachers should also think about the amount of homework they give students, as well as their thinking strategies and how they manage resources. These factors can influence how well teaching quality relates to satisfaction with math. In summary, this study shows that a well-organized schedule can help improve the amount of homework done and the success in math. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
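The mediation analysis summarized above (and in the similar entries elsewhere in this list) rests on estimating an indirect effect through the mediator. Below is a minimal, hypothetical sketch of the product-of-coefficients approach with a percentile-bootstrap confidence interval, on synthetic data standing in for homework load (X), time management (M), and achievement (Y); it is not the authors' procedure or data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 150

# Synthetic data: X -> M -> Y with a small direct X -> Y effect.
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(scale=0.8, size=n)             # "a" path
y = 0.4 * m + 0.1 * x + rng.normal(scale=0.8, size=n)   # "b" path plus direct effect

def indirect_effect(x, m, y):
    """Product-of-coefficients indirect effect a*b from two OLS fits."""
    a = np.polyfit(x, m, 1)[0]                           # slope of M on X
    X2 = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][2]         # slope of Y on M, controlling X
    return a * b

# Percentile bootstrap confidence interval for the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```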
19. The Mediating Effect of Self-Regulation on the Relationship between Mathematical Disposition and Mathematics Proficiency among Mathematics Education Students.
- Author
-
Tuba, Klerth S. and Espinosa, Deveyvon L.
- Subjects
MATHEMATICS education ,EDUCATIONAL planning ,MATHEMATICS students ,EDUCATION students ,SAMPLING errors - Abstract
The purpose of this study was to determine whether self-regulation mediates the relationship between mathematical disposition and mathematics proficiency among first-year mathematics major students. The study was quantitative, non-experimental research that utilized descriptive-correlational and mediation analysis. Using stratified random sampling, specifically proportional allocation, for the sampling technique and Slovin's formula with a 0.05 margin of error for the sample size, a sample of 92 randomly selected mathematics education students answered the surveys on the three variables. Results showed that the level of mathematical disposition and the mediating variable, self-regulation, were both at a high level; the level of mathematics proficiency, which was measured through a 25-item multiple-choice questionnaire, was at a very good or satisfactory level. Results also revealed that the relationships between mathematical disposition and mathematics proficiency, between mathematical disposition and self-regulation, and between self-regulation and mathematics proficiency among first-year mathematics education students were all significant. However, results showed that self-regulation did not mediate the relationship between mathematical disposition and mathematics proficiency, as the indirect effect had a p-value greater than 0.5, so the null hypothesis was accepted. This implied that mathematical disposition directly influences mathematics proficiency without the intervention of self-regulation. This finding underscored the complexity of factors influencing mathematical proficiency and suggests avenues for further research to better understand the mechanisms at play. Understanding these relationships can inform interventions and educational strategies aimed at improving mathematics proficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
20. Characterizing the intrinsic and the one-dimensional heterogeneities of a niobium ore based on Pierre Gy's Theory of Sampling.
- Author
-
Carolina Chieregati, Ana, Vaz Dias, Rafael, and Yuntang Lan
- Subjects
- *
SAMPLING (Process) , *CONVEYOR belts , *SAMPLING errors , *BELT conveyors , *NIOBIUM - Abstract
The Fundamental Sampling Error (FSE) is the main error defined by Pierre Gy's Theory of Sampling (Gy, 1967; 1979; 1992) and is related to the constitution or intrinsic heterogeneity (IH) of the ore. Even if a sampling procedure is considered ideal, this error can never be eliminated. To calculate the FSE for a certain sample taken from a certain fragmented lot, crushed to a certain size, the intrinsic heterogeneity of the lot (IHL) must be estimated, which can be done theoretically by applying Gy's factors, or experimentally by performing heterogeneity tests. FSE calculation allows the optimization of sampling protocols, the calculation of minimum sample masses, as well as the estimation of the precision of a sampling procedure or equipment. FSE represents the zero-dimensional heterogeneity of a lot and it is of utmost importance to calculate it. However, there is another type of heterogeneity related to one-dimensional lots, i.e., the material flow on conveyor belts or in pipelines. This one-dimensional heterogeneity can be characterized with variography, by estimating the Heterogeneity Fluctuation Error (HFE). Obtaining reliable information on ore grades at the plant feed is a great challenge for mining operations. When the precision of the plant feed grade is low, incorrect decisions can be made and may decrease the process yield. In order to estimate both FSE and HFE for a Brazilian niobium ore, a sampling campaign was carried out at the plant feed. Results indicated that the 5-minute sampling interval was appropriate, resulting in a low relative standard deviation of HFE, i.e., 2.26% for Nb2O5, considering a 95% confidence interval. This article shows how to estimate the zero- and one-dimensional heterogeneities of ores and how important it is to define the precision associated with the grade estimates for process control, metallurgical accounting and reconciliation purposes. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
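The FSE discussed above is conventionally computed from Gy's classical formula, relative variance = c·f·g·ℓ·d³·(1/Ms − 1/ML). The sketch below simply evaluates that formula; all parameter values are illustrative placeholders, not figures from the niobium study.

```python
def fse_relative_std(c, f, g, liberation, d_cm, sample_mass_g, lot_mass_g):
    """Relative standard deviation of Gy's Fundamental Sampling Error:
    sigma^2 = c * f * g * l * d^3 * (1/Ms - 1/ML)."""
    variance = c * f * g * liberation * d_cm**3 * (1.0 / sample_mass_g - 1.0 / lot_mass_g)
    return variance ** 0.5

# Purely illustrative parameter values:
# c: mineralogical factor (g/cm^3), f: shape factor, g: granulometric factor,
# liberation: liberation factor, d_cm: top particle size (cm), masses in grams.
sigma = fse_relative_std(c=50.0, f=0.5, g=0.25, liberation=0.2, d_cm=1.0,
                         sample_mass_g=2_000.0, lot_mass_g=1_000_000.0)
print(f"relative FSE standard deviation ~ {100 * sigma:.1f} %")
```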
21. A semi-supervised log anomaly detection model based on kernel principal component analysis.
- Author
-
顾兆军, 叶经纬, 刘春波, 张智凯, and 王志
- Subjects
- *
PRINCIPAL components analysis , *K-means clustering , *DATA logging , *SAMPLING errors , *PROBLEM solving , *SUPERVISED learning - Abstract
For system log data that exhibit both "group anomaly" and "local anomaly" distribution characteristics, the traditional semi-supervised approach of anomaly detection with partially observed anomalies (ADOA) generates pseudo-labels of poor accuracy for unlabeled data. To address this problem, an improved semi-supervised log anomaly detection model is proposed. The known abnormal samples were clustered by k-means, and the reconstruction errors of unlabeled samples were calculated by kernel principal component analysis. A comprehensive anomaly score for each sample was calculated from its reconstruction error and its similarity to the abnormal samples, and this score was used as the pseudo-label. Sample weights for the LightGBM classifier were calculated based on the pseudo-labels to train the anomaly detection model. The impact of the proportion of training set samples on model performance was explored through parameter experiments. Experiments were conducted on two public datasets, HDFS and BGL. The results show that the proposed model can improve pseudo-label accuracy. Compared with the existing models DeepLog, LogAnomaly, LogCluster, PCA and PLELog, the precision and F1 score are improved. Compared with the traditional ADOA method, F1 scores are increased by 8.4% and 8.5% on the two datasets, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
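As a rough sketch of the pipeline the abstract describes (k-means on the known anomalies, kernel-PCA reconstruction errors for unlabeled samples, and a combined score used as a pseudo-label), the following toy example uses scikit-learn on synthetic features. The equal weighting of the two score components, the kernel settings, and the data are assumptions, and the final LightGBM training step is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import KernelPCA
from sklearn.metrics import pairwise_distances

rng = np.random.default_rng(5)

# Toy feature vectors standing in for parsed log sequences.
known_anomalies = rng.normal(4, 1, size=(20, 10))           # labelled anomalies
unlabeled = np.vstack([rng.normal(0, 1, size=(180, 10)),    # mostly normal ...
                       rng.normal(4, 1, size=(20, 10))])    # ... plus hidden anomalies

# Reconstruction error from kernel PCA fitted on the unlabeled data.
kpca = KernelPCA(n_components=5, kernel="rbf", fit_inverse_transform=True).fit(unlabeled)
recon = kpca.inverse_transform(kpca.transform(unlabeled))
recon_error = np.linalg.norm(unlabeled - recon, axis=1)

# Similarity to k-means clusters of the known anomalies.
centres = KMeans(n_clusters=2, n_init=10, random_state=0).fit(known_anomalies).cluster_centers_
similarity = 1.0 / (1.0 + pairwise_distances(unlabeled, centres).min(axis=1))

def normalise(v):
    return (v - v.min()) / (v.max() - v.min())

# Comprehensive anomaly score used as a soft pseudo-label (equal weights assumed).
pseudo_label = 0.5 * normalise(recon_error) + 0.5 * normalise(similarity)
print("top-5 pseudo-label scores:", np.round(np.sort(pseudo_label)[-5:], 2))
```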
22. A Class of Ratio Cum Product Estimators with Non-Response and Measurement Error Using ORRT Models: A Double Sampling Scheme.
- Author
-
KUMAR, SUNIL, KOUR, SANAM PREET, and SINGH, HOUSILA P.
- Subjects
- *
MEASUREMENT errors , *NONRESPONSE (Statistics) , *DEMOGRAPHIC surveys , *RANDOMIZED response , *SAMPLING errors - Abstract
The current study employs the ratio-cum-product estimator to estimate the population mean of a sensitive study variable, aiming to overcome issues related to non-response and measurement error in the context of double sampling. The characteristics of the proposed class of estimators are computed up to the first order of approximation. A comparative analysis is conducted to assess the performance of the suggested estimator among the suggested class of estimators and the estimator proposed by Kumar & Zhang (2023). Additionally, theoretical findings are supported by conducting two model-based simulation studies using a hypothetical population. The simulation results demonstrate that the proposed ratio-cum-product estimator under double sampling exhibits the lowest mean squared error among the Kumar & Zhang (2023) estimator and all classes of suggested estimators, indicating its superior performance in estimating the population mean of a sensitive variable. As a result, the suggested estimator offers a valuable tool for estimating the population mean in surveys across agriculture, environmental studies, market research and health research. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
23. A Two-Stage Classification for Dealing with Unseen Clusters in the Testing Data.
- Author
-
JUNG WUN LEE and HAREL, OFER
- Subjects
- *
OUTLIER detection , *SAMPLING errors , *DATA science , *CLUSTER analysis (Statistics) , *OPEN clusters of stars , *CLUSTER sampling - Abstract
Classification is an important statistical tool that has increased its importance since the emergence of the data science revolution. However, a training data set that does not capture all underlying population subgroups (or clusters) will result in biased estimates or misclassification. In this paper, we introduce a statistical and computational solution to a possible bias in classification when implemented on estimated population clusters. An unseen-cluster problem denotes the case in which the training data does not contain all underlying clusters in the population. Such a scenario may occur due to various reasons, such as sampling errors, selection bias, or emerging and disappearing population clusters. Once an unseen-cluster problem occurs, a testing observation will be misclassified because a classification rule based on the sample cannot capture a cluster not observed in the training data (sample). To overcome such issues, we suggest a two-stage classification method to ameliorate the unseen-cluster problem in classification. We suggest a test to identify the unseen-cluster problem and demonstrate the performance of the two-stage tailored classifier using simulations and a public data example. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
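The paper above proposes a specific statistical test for the unseen-cluster problem; the sketch below is only a conceptual stand-in showing the two-stage idea on synthetic data: a simple distance-based novelty screen flags test points that resemble no training cluster, and a standard classifier handles the rest. The threshold rule and all data are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

# Training data contains two clusters (classes 0 and 1); the test data also
# contains a third cluster that the training sample never observed.
train_X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
train_y = np.array([0] * 100 + [1] * 100)
test_X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2)),
                    rng.normal([10, -5], 1, (50, 2))])      # unseen cluster

# Stage 1: flag observations far from every training observation
# (a crude novelty screen standing in for the paper's formal test).
def min_distance_to_train(x, train):
    return np.min(np.linalg.norm(train - x, axis=1))

loo = [min_distance_to_train(x, np.delete(train_X, i, axis=0)) for i, x in enumerate(train_X)]
threshold = np.quantile(loo, 0.999) * 1.5
novel = np.array([min_distance_to_train(x, train_X) for x in test_X]) > threshold

# Stage 2: classify only the observations that resemble the training clusters.
clf = LogisticRegression().fit(train_X, train_y)
pred = np.where(novel, -1, clf.predict(test_X))             # -1 marks "unseen cluster"
print("flagged as unseen cluster:", int(novel.sum()), "of", len(test_X))
print("class counts among the rest:", np.bincount(pred[pred >= 0]))
```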
24. First Experiences with Fusion of PET-CT and MRI Datasets for Navigation-Assisted Percutaneous Biopsies for Primary and Metastatic Bone Tumors.
- Author
-
Fritzsche, Hagen, Pape, Alexander, Schaser, Klaus-Dieter, Beyer, Franziska, Plodeck, Verena, Hoffmann, Ralf-Thorsten, Hahlbohm, Patricia, Mehnert, Elisabeth, and Weidlich, Anne
- Subjects
- *
THREE-dimensional imaging , *MAGNETIC resonance imaging , *POSITRON emission tomography computed tomography , *SAMPLING errors , *METASTASIS - Abstract
Background: The aim of this study was to compare the technique of navigation-assisted biopsy based on fused PET and MRI datasets to CT-guided biopsies in terms of the duration of the procedure, radiation dose, complication rate, and accuracy of the biopsy, particularly in anatomically complex regions. Methods: Between 2019 and 2022, retrospectively collected data included all navigated biopsies and CT-guided biopsies of suspected primary bone tumors or solitary metastases. Navigation was based on preoperative CT, PET-CT/-MRI, and MRI datasets, and tumor biopsies were performed using intraoperative 3D imaging combined with a navigation system. Results: A total of 22 navigated (main group: m/f = 10/12, mean age: 56 yrs.) and 57 CT-guided biopsies (reference group: m/f = 36/21, mean age: 63 yrs.) were performed. Patients were grouped according to anatomic sites (pelvis, spine, extremities, thorax). The duration of the procedure in the reference group was significantly shorter than in the main group, particularly in the spine. The effective radiation dose was in the same range in both groups (main/reference group: 0.579 mSv and 0.687 mSv, respectively). In the reference group, a re-biopsy had to be performed in nine patients (diagnostic yield: 84%). A total of four major and three minor complications occurred in the reference group. Conclusions: Navigation-assisted percutaneous tumor biopsy resulted in correct, histologically useable diagnoses in all patients and reached a higher accuracy and first-time success rate (diagnostic yield: 100%) in comparison to CT-guided biopsies. The fusion of PET, CT, and MRI datasets enables us to combine anatomical with metabolic information. Consequently, target selection was improved, and the rate of false negative/low-grade sampling errors was decreased. Radiation exposure could be kept at a comparable level, and the durations of both procedures were comparable to conventional methods. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
25. A systematic review and meta-analysis of cow-level factors affecting milk urea nitrogen and urinary nitrogen output under pasture-based diets.
- Author
-
Mangwe, Mancoba C., Mason, Winston A., Reed, Charlotte B., Spaans, Olivia K., Pacheco, David, and Bryant, Racheal H.
- Subjects
- *
METABOLIZABLE energy values , *RANDOM effects model , *DAIRY farms , *DAIRY cattle , *SAMPLING errors , *FIXED effects model , *DAIRY farm management - Abstract
The list of standard abbreviations for JDS is available at adsa.org/jds-abbreviations-24. Nonstandard abbreviations are available in the Notes. With dairy cattle farming under pressure to lower its environmental footprint, it is important to find effective on-farm proxies for evaluation and monitoring of management practices aimed at reducing the risk of nitrogen (N) losses and optimizing N use efficiency of dairy farm systems. Urinary N (UN) is regarded as the most potent source of N emissions. In contrast to confinement systems, there have been few studies from pasture-based systems associating on-farm animal and nutritional factors with UN output. Thus, the aims of this meta-analysis were to collate a database from pasture-based research in order to (1) investigate the associations of management, dietary, and animal variables with MUN concentration and daily UN output; (2) describe the MUN-UN association; and (3) assess whether animal, management, and dietary factors influence the relationship. We developed a dataset consisting of 95 observations representing 919 lactating dairy cattle fed pasture-based diets, which was compiled from 32 unique research publications that reported both MUN and UN output. Multilevel, mixed meta-analysis regression techniques were used to analyze the data. Initially, all variables were assessed as the sole fixed effect in a 2-level random effects model, accounting for within-publication heterogeneity. Meta-regression techniques were then used to assess the relationship of all variables with MUN and UN output, respectively, accounting for 3 sources of variability: the sampling error of the individual observation, within-publication heterogeneity, and among-publication heterogeneity. At the univariable level, despite more than 10 dietary, animal, or management variables being significantly associated with MUN, none explained a large amount of the MUN variation. The variables that explained the greatest amount of variation were dietary CP content and the ratio of nitrogen to ME content, which explained about 33% and 31% of the variation in MUN concentrations, respectively. Combining factors in multiple regressions improved the model fit, such that the variation within publications explained by dietary CP and N intake increased to 40.0% in the final multiple meta-regression model. For UN output, individual variables explained a greater proportion of variance reported among observations, compared with MUN, whereby diet CP content (pseudo R2 = 66.1%), N-to-ME intake ratio (pseudo R2 = 64.0%), N intake (pseudo R2 = 58.3%), and MUN (pseudo R2 = 43.5%) explained the greatest amount of the total variation. Milk urea nitrogen, N intake, and DMI were associated with UN output in the final multiple meta-regression model. Substantial heterogeneity existed in both MUN and UN among publications, with among-publication heterogeneity accounting for 73.4% of all the variation noted in MUN, and 88.6% of all the variation in UN output. As such, the meta-analyses could not predict MUN and UN to any great extent. It is recommended that a consistent approach to measuring and reporting MUN concentrations and UN output be carried out for all future research in pasture-based systems. [Display omitted] [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
26. A Decision Tree Mechanism for Microfluidic Sample Preparation for Digital Biochips.
- Author
-
Saha, Basudev and Majumder, Mukta
- Subjects
- *
GLUCOSE analysis , *TECHNOLOGICAL innovations , *DECISION trees , *PROTEIN analysis , *SAMPLING errors - Abstract
Microfluidic technology has advanced rapidly and become widely appreciated in miniaturized laboratory work. The effectiveness of laboratory work such as protein and glucose analysis and pharmaceutical studies depends on convenient sample preparation. These processes involve dilution of the primary reactant with buffer fluid in an appropriate proportion to ensure error-free bioassay operations in the future. In this paper, a decision tree-based method is proposed to construct the mixing tree for exploring all possible combinations of concentration values to generate the target sample for different biochemical experiments. This work is also extended to concurrent multi-target sample preparation to reduce the sample preparation time and cost. Simulation results show that the proposed technique not only reduces the mixing/dilution operations for single-target sample preparation but also minimizes the use of primary reactant compared with contemporary approaches, and an enhanced outcome is also achieved for multi-target sample production. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Decentralized Periodic Event‐Triggered Control for Nonlinear Switched Systems.
- Author
-
Fu, Anqi, Dai, Yongji, and Qiao, Junfei
- Subjects
- *
WATER distribution , *NONLINEAR systems , *SAMPLING errors , *MODEL theory , *DETECTORS - Abstract
ABSTRACT Decentralized periodic event‐triggered control (DPETC) is a method that allows sensors to be placed flexibly while cutting down on their workload and the amount of data they send in a feedback loop, which makes it a good fit for wireless network control systems. In this article, we investigate the control issues of nonlinear switched systems whose switches are determined by the system input and consider their stability by designing a DPETC scheme. Additionally, we model and control a water distribution system with looped pipe network using a switched model and the proposed theory. In this DPETC, at each periodic sampling time, the local event‐triggered mechanism determines events by assessing whether the local sampling error meets the predefined triggering condition. If so, the collected sampling data will be transmitted to a central controller. This controller processes the data to compute the control input, with those nodes without events using the previous updated sampling data. Based on the derived control input, the controller then switches to the appropriate mode following the system's switches. The feasibility of the proposed method is demonstrated through three examples, including a smart water distribution system with looped pipeline. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. The Mediating Effect of the Learning Environment on the Relationship Between Adaptive Reasoning and Mathematical Engagement among First-Year Students.
- Author
-
Cruz, Noel D. and Gementiza, Linagyn A.
- Subjects
CLASSROOM environment ,SAMPLING errors ,STATISTICAL sampling ,SAMPLING (Process) ,SAMPLE size (Statistics) - Abstract
The study aimed to determine the learning environment's mediating effect on the relationship between adaptive reasoning and mathematical engagement among first-year students. The study was quantitative, non-experimental research that utilized descriptive-correlational and mediation analyses. Using stratified random sampling, specifically proportional allocation for the sampling techniques, and Slovin's formula with a 0.05 margin of error for the sample size, 258 randomly selected students across all programs were the respondents. Results showed high levels of adaptive reasoning, mathematical engagement, and learning environment. Results also revealed significant relationships between adaptive reasoning and mathematical engagement, between adaptive reasoning and learning environment, and between learning environment and mathematical engagement. Moreover, the results showed that the learning environment partially mediated the relationship between adaptive reasoning and mathematical engagement. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Class of Calibrated Estimators of Population Proportion Under Diagonal Systematic Sampling Scheme.
- Author
-
Audu, Ahmed, Aphane, Maggie, Ahmad, Jabir, and Singh, Ran Vijay Kumar
- Subjects
- *
STATISTICAL sampling , *BINOMIAL distribution , *CALIBRATION , *PROBABILITY theory , *PERCENTILES , *SAMPLING errors - Abstract
Estimators of population characteristics which only exploit information of the study characters tend to be prone to outliers or extreme values that may characterize sampling information due to randomness in selection thereby making them to be less efficient and robust. One of the approaches often adopted in sampling surveys to address the aforementioned issue is to incorporate supplementary character information into the estimators through a calibration approach. Therefore, this study introduced two novel methods for estimating population proportion using diagonal systematic sampling with the help of an auxiliary variable. We developed two new calibration schemes and analyzed the theoretical properties (biases and mean squared errors) of the estimators up to the second-degree approximation. The theoretical findings were supported by simulation studies on five populations generated using the binomial distribution with various success probabilities. Biases, mean square errors (MSE) and the percentage relative efficiency (PRE) were computed, and the results revealed that the proposed estimators have the least biases, the least MSEs and higher PREs, indicating the superiority of the proposed estimators over the existing conventional estimator. The simulation results showed that our proposed estimators under the proposed calibration schemes performed more efficiently on average compared to the traditional unbiased estimator proposed for population proportion under diagonal systematic sampling. The superiority of the results of the proposed method over the conventional method in terms of bias, efficiency, efficiency gain, robustness and stability imply that the calibration approach developed in the study proved to be effective. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Research on disturbance suppression in permanent magnet servo systems based on ILC and super-twisting sliding mode control.
- Author
-
杨羽萌, 朱其新, 张拥军, 眭立洪, and 朱永红
- Subjects
PERMANENT magnet motors ,SAMPLING errors ,ELECTRIC torque motors ,SLIDING mode control ,ITERATIVE learning control ,PROBLEM solving - Abstract
- Published
- 2024
- Full Text
- View/download PDF
31. Improvement of EME accuracy based on an equivalent voltage acquisition method and corresponding voltage compensation strategy.
- Author
-
Wang, Mingyu, Yang, Beining, Qin, Yaru, and Dong, Guanglin
- Subjects
ALTERNATING current electric motors ,MOTOR drives (Electric motors) ,SAMPLING errors ,MOTOR unit ,ELECTRIC machines - Abstract
Employing a DSP-based electric machine emulator (EME) for motor control unit testing is a known cost-effective method, and this paper introduces a novel approach to enhance the emulator's accuracy. The inaccuracies caused by terminal voltage sampling errors in the motor control unit and voltage distortions in the inverter are addressed in this paper. An advanced equivalent voltage acquisition technique, which samples the duty cycle-amplitude of the inverter's terminal voltage, is developed. Leveraging the acquired equivalent voltage data, a novel voltage compensation strategy that provides greater accuracy in EME performance is proposed. The mathematical foundation of the EME is established using Kirchhoff's voltage and current laws. These findings are independently validated through simulations and experiments. The results provide robust evidence that the proposed equivalent voltage acquisition and compensation strategies can enhance the accuracy of the EME. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Clustering Functional Data With Measurement Errors: A Simulation‐Based Approach.
- Author
-
Zhu, Tingyu, Xue, Lan, Tekwe, Carmen, Diaz, Keith, Benden, Mark, and Zoh, Roger
- Subjects
- *
MEASUREMENT errors , *DATA structures , *SAMPLING errors , *CHILDHOOD obesity , *FUNCTIONAL analysis - Abstract
Clustering analysis of functional data, which comprises observations that evolve continuously over time or space, has gained increasing attention across various scientific disciplines. Practical applications often involve functional data that are contaminated with measurement errors arising from imprecise instruments, sampling errors, or other sources. These errors can significantly distort the inherent data structure, resulting in erroneous clustering outcomes. In this article, we propose a simulation-based approach designed to mitigate the impact of measurement errors. Our proposed method estimates the distribution of functional measurement errors through repeated measurements. Subsequently, the clustering algorithm is applied to simulated data generated from the conditional distribution of the unobserved true functional data given the observed contaminated functional data, accounting for the adjustments made to rectify measurement errors. We illustrate through simulations that the proposed method has better numerical performance than naive methods that neglect such errors. Our proposed method was applied to a childhood obesity study, giving more reliable clustering results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Molecular identification of whole squids and calamari at fairs and markets in regions of Latin America.
- Author
-
Paiva, Bianca Lima, Rodrigues, Alan Erik Souza, Almeida, Igor Oliveira de Freitas, Silva, Kamila de Fatima, Haimovici, Manuel, Markaida, Unai, Charvet, Patricia, Faria, Vicente Vieira, Batista, Bruno B., Tomás, Acácio Ribeiro Gomes, Rodrigues-Filho, Luis Fernando da Silva, Ready, Jonathan Stuart, and Sales, João Bráullio de Luna
- Subjects
- *
GENETIC databases , *SAMPLING errors , *SQUIDS , *CEPHALOPODA , *RECOMBINANT DNA - Abstract
The commercial importance of cephalopods has increased considerably, as they are an important fishing resource. However, during preparation for commercialization, these species undergo the process known as "finning", which includes removing and separating the head, arms and skin, or even cutting the body into rings; this makes identification of the species difficult or impossible and can lead to substitutions. In this sense, the present study aimed to use the large ribosomal region, rrnL (16S rDNA), to genetically identify cephalopod species sold in markets and fairs in Latin America. Whole and processed samples were collected from supermarkets and directly from local fishers, noting the approximate collection location. Each generated sequence was submitted to the blastn portal for molecular comparison and included in the database for subsequent genetic identification. Our results indicate labeling errors in samples from the State of Pará that contained the species Dosidicus gigas, found only in the Pacific Ocean, and were generically labeled as "National Lula". No type of substitution was found among the samples that were being sold at fairs and markets, only labeling errors. Thus, our results demonstrate the effectiveness of the rrnL for identifying species and evaluating labeling errors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Residential flood risk in Metro Vancouver due to climate change using probability boxes.
- Author
-
Viseh, Hiva and Bristow, David N.
- Subjects
- *
MONTE Carlo method , *FLOOD risk , *EPISTEMIC uncertainty , *SAMPLING errors , *CLIMATE change , *FLOOD damage - Abstract
To enhance the decision-making process and reduce economic losses caused by future flooding, it is critical to quantify uncertainty in each step of flood risk analysis. To address the often-missing uncertainty quantification, we propose a new methodology that combines damage functions and probability bounds analysis. We estimate the likely direct tangible damage to 375,973 residential buildings along the Fraser River through Metro Vancouver, Canada, for a range of climate change driven flood scenarios, while transparently representing the associated uncertainties caused by sampling error, imprecise measurement, and uncertainty in building height and basement conditions. Furthermore, for the purposes of developing effective management strategies, uncertainties in this study are classified into two categories, namely aleatory and epistemic. According to our findings and absent significant action, we should expect an enormous increase in flood damage to the four categories of residential buildings considered in this study by the year 2100. Moreover, the results show that second-order Monte Carlo simulation cannot adequately represent epistemic uncertainty for small sample sizes. In such a case, we recommend employing a probability box to delineate the epistemic uncertainty. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
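A minimal sketch of the probability-box idea used in the entry above: when a distribution parameter is only known to lie in an interval (epistemic uncertainty), the lower and upper CDF envelopes over all admissible parameter values bound the quantity of interest, without forcing a single precise distribution. The lognormal damage model, the parameter interval, and the threshold are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy import stats

# Epistemic uncertainty: the mean of log-damage is only known to lie in an interval
mu_lo, mu_hi, sigma = 1.0, 1.4, 0.5        # illustrative values, not from the study
x = np.linspace(0.1, 20.0, 400)            # damage (arbitrary monetary units)

# The p-box is the envelope of CDFs over all admissible parameter values
mus = np.linspace(mu_lo, mu_hi, 50)
cdfs = np.array([stats.lognorm.cdf(x, s=sigma, scale=np.exp(m)) for m in mus])
cdf_lower, cdf_upper = cdfs.min(axis=0), cdfs.max(axis=0)

# Bounds on P(damage <= 5): the width of the interval reflects epistemic uncertainty
i = np.searchsorted(x, 5.0)
print(f"P(damage <= 5) lies in [{cdf_lower[i]:.3f}, {cdf_upper[i]:.3f}]")
```

Unlike a second-order Monte Carlo simulation, which needs enough samples to characterise the outer (epistemic) loop, the envelope itself makes no distributional claim about the uncertain parameter, which is why p-boxes remain informative at small sample sizes.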
35. Modeling Survey Time Series Data with Flow-Observed CARMA Processes.
- Author
-
Joyce, Patrick M. and McElroy, Tucker S.
- Subjects
- *
AMERICAN Community Survey , *MOVING average process , *TIME series analysis , *INTERPOLATION , *CONSUMERS , *SAMPLING errors - Abstract
Published survey data often are delivered as estimates computed over an epoch of time. Customers may desire to obtain survey estimates corresponding to epochs, or time points, that differ from the published estimates. This "change of support" problem can be addressed through the use of continuous-time models of the underlying population process, while taking into account the sampling error that survey data is subject to. The application of a Continuous AutoRegressive Moving Average (CARMA) model is investigated as a tool to provide change of support applications, thereby allowing interpolation for published survey estimates. A simulation study provides comparisons of competing estimation methods, and a synthetically constructed data set is developed in order to elucidate real data applications. The proposed method can be successful for change of support problems, despite modeling challenges with the CARMA framework. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Optimization of Mineral Resource Definition Drilling Programs (Maden Kaynak Belirleme Sondaj Programlarının Optimizasyonu).
- Author
-
ÖZKAN, Yusuf Ziya
- Subjects
- *
PARTICLE swarm optimization , *DISTRIBUTION (Probability theory) , *SAMPLING errors , *ORE deposits , *GEOMETRIC approach , *KRIGING - Abstract
Mineral resource definition drilling programs are a sampling process used to determine the boundaries and characteristics of mineral deposits. There are two types of sampling errors in this process: systematic and random. Systematic errors arise from issues such as equipment calibration errors or insufficient representation of the deposit by the samples, which negatively affect the accuracy of predictions. Random errors, on the other hand, result from the random distribution of samples or natural variability, leading to uncertainty in predictions. The impact of these errors can be reduced through increased sample size, but since drilling is an expensive operation, it is essential to strike a balance between cost and achieving an acceptable level of certainty. The optimization of resource definition drilling focuses on determining the placement of drilling points or spacing between drill holes in a way that maintains this balance. The geometric approach aims to optimize by reducing the overlap of drilling effects, while geostatistical methods seek to reduce uncertainty to acceptable levels using metrics like kriging variance. The value of information approach aims to maximize the economic benefit derived from the obtained data, while minimizing misclassification costs focuses on preventing economic losses. Metaheuristic methods such as genetic algorithms and particle swarm optimization are also effective in managing uncertainty, but their application is limited due to the high computational power required. These optimization methods contribute significantly to reducing costs and improving the accuracy of the resource model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Scale-Dependent Inflation Algorithms for Ensemble Kalman Filters.
- Author
-
Deng, Junjie, Lei, Lili, Tan, Zhe-Min, and Zhang, Yi
- Subjects
- *
DATA assimilation , *SAMPLING errors , *SQUARE root , *LEAD time (Supply chain management) , *PRICE inflation , *KALMAN filtering - Abstract
Ensemble-based data assimilation methods often suffer sampling errors due to limited ensemble sizes and model errors, which can result in filter divergence. One method to avoid filter divergence is inflation, which inflates ensemble perturbations to increase ensemble spread and account for model errors. The commonly applied inflation methods, including the multiplicative inflation, relaxation to prior spread (RTPS), and additive inflation, often use a constant inflation parameter. To capture different error growths at different scales, a scale-dependent inflation is proposed here, which applies different inflation magnitudes for variables associated with different scales. Results from the two-scale Lorenz05 model III show that for the ensemble square root filter (EnSRF) and integrated hybrid ensemble Kalman filter with ensemble mean updated by hybrid background error covariances (IHEnKF-Mean), scale-dependent inflation is superior to constant inflation. Constant inflation overinflates small-scale variables and results in increased small-scale errors, which then propagate to large-scale variables through the coupling between large- and small-scale variables and lead to increased large-scale errors. Scale-dependent inflation applies larger inflation for large-scale variables and imposes no inflation for small-scale variables, since large-scale errors have larger magnitudes than small-scale ones and small-scale errors grow faster than large-scale ones. But IHEnKF-Ensemble that updates both the ensemble mean and perturbations with hybrid background error covariances is much less sensitive to scale-dependent inflation, compared to EnSRF and IHEnKF-Mean, because updating ensemble perturbations with hybrid background error covariances can play a role similar to the scale-dependent inflation. Significance Statement: Ensemble-based data assimilation and ensemble forecasts often have smaller ensemble spread than errors. Strategies used by ensemble-based data assimilation to combat insufficient ensemble spread usually focus on the short-term ensemble forecasts, rather than considering the whole ensemble forecasts over different lead times. Thus, there are obvious gaps between the ensemble-based data assimilation and ensemble forecasts. A scale-dependent inflation that can capture different error growths at different scales is proposed, which obtains improved consistency between ensemble spread and errors at different lead times and effectively links the ensemble-based data assimilation and ensemble forecasts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
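A minimal sketch of scale-dependent multiplicative inflation as described in the entry above: ensemble perturbations are separated into large- and small-scale components with a spectral cutoff and inflated by scale-specific factors. The Fourier-based separation, the cutoff wavenumber, and the inflation factors are illustrative assumptions, not the authors' configuration for the Lorenz05 experiments.

```python
import numpy as np

def scale_dependent_inflation(ensemble, k_cut=4, infl_large=1.2, infl_small=1.0):
    """Inflate ensemble perturbations with different factors per scale band.

    ensemble: array of shape (n_members, n_grid) on a periodic 1-D domain.
    Wavenumbers <= k_cut are treated as 'large scale' and inflated by infl_large;
    the remaining wavenumbers are scaled by infl_small (1.0 = no inflation).
    """
    mean = ensemble.mean(axis=0)
    pert = ensemble - mean
    spec = np.fft.rfft(pert, axis=1)                      # per-member perturbation spectra
    k = np.arange(spec.shape[1])
    factor = np.where(k <= k_cut, infl_large, infl_small)
    inflated = np.fft.irfft(spec * factor, n=ensemble.shape[1], axis=1)
    return mean + inflated

# Usage on a toy ensemble: 20 members, 96 periodic grid points
rng = np.random.default_rng(1)
ens = rng.standard_normal((20, 96))
ens_infl = scale_dependent_inflation(ens, k_cut=4, infl_large=1.2, infl_small=1.0)
print("mean spread before/after:", ens.std(axis=0).mean().round(3), ens_infl.std(axis=0).mean().round(3))
```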
38. Accurate surface profile measurement using CMM without estimating tip correction vectors.
- Author
-
Watanabe, M., Sato, O., Matsuzaki, K., Kajima, M., Watanabe, T., Bitou, Y., and Takatsuji, T.
- Subjects
- *
CURVED surfaces , *INDUSTRIAL goods , *CURVATURE , *SAMPLING errors , *MEASUREMENT errors - Abstract
Detailed measurement of the curved surface of an industrial product with a radius of curvature of less than a few millimeters is a challenging task for tactile coordinate measuring machines. To estimate a surface profile, tip radius correction is typically performed by estimating the tip correction vector direction. However, a substantial measurement error is introduced by the error in estimating the tip correction vector direction under measurement conditions such as a large position measurement error of an indicated measured point or a short sampling interval. In this study, a method that estimates a surface profile by calculating the envelope of the probe tip path was proposed. The proposed method was experimentally and numerically confirmed to be able to estimate surface profiles with sub-micrometer accuracy under such measurement conditions. • A novel method was developed for determining the profile of a surface with a radius of curvature of less than a few millimeters using a tactile coordinate measuring machine. • We verified the feasibility of the proposed method, which calculates the envelope of the probe tip path. • The surface profile obtained by the proposed method can be estimated with sub-micrometer accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Prediction error compensation method of FCSMPC for converter based on neural network.
- Author
-
Shen, Kun, Chen, Haoxiang, Zhang, Mengmei, and Wu, Mengyao
- Subjects
- *
ARTIFICIAL neural networks , *SIMULATION methods & models , *ONLINE education , *PREDICTION models , *SAMPLING errors - Abstract
FCSMPC is a classical converter predictive control algorithm whose control performance is affected by the prediction error of the prediction model. In classical predictive control theory, a feedback correction mechanism is used to compensate for such prediction error. However, when this strategy is directly applied to the FCSMPC algorithm, the prediction error cannot be easily calculated. To address the prediction error compensation problem of FCSMPC, this paper proposes a prediction error compensation method based on a neural network. A neural network prediction model is constructed based on the timing characteristics of the prediction error signals. The prediction error of the prediction model at the next moment is calculated by the designed neural network, and the output of the prediction model is then compensated at the current moment. To improve the anti-interference performance of FCSMPC, the MRSVD algorithm is used to filter the prediction error sample data, and the neural networks are trained on the filtered data. The adaptability of the prediction error calculation is further improved by combining offline training with online calculation of the neural network. A simulation model of the proposed method is constructed in MATLAB; the simulation results show that the control performance of the FCSMPC algorithm is improved, verifying the effectiveness and feasibility of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. High‐resolution melting analysis to authenticate deer‐derived materials in processed products in China using a cytochrome oxidase I mini‐barcode.
- Author
-
Feng, Jian, Ren, Qiqi, Xie, Anzhen, Jiang, Zixiao, and Liu, Yangyang
- Subjects
- *
SIKA deer , *CYTOCHROME oxidase , *RED deer , *PRODUCT counterfeiting , *SAMPLING errors - Abstract
BACKGROUND: Deer‐derived materials (antler, venison, fetus, penis, bone, tail, and others) are some of the most valuable traditional animal‐based medicinal and food materials in China. In production, processing, and trade, the quality of deer products varies; the market is confusing, and counterfeit and shoddy products are common. There is an urgent need to establish an accurate identification method. RESULTS: Two pairs of primers suitable for identifying deer‐derived medicinal materials were obtained by screening the cytochrome oxidase I (COI) sequences of 18 species from nine genera of the deer family. The two primer pairs were used to identify the species and adulteration of 22 batches of commercially available deer‐derived products with a mini‐barcode combining high‐resolution melting (HRM) technology and methodical investigation. Deer‐derived materials (sika and red deer) were correctly identified to species using varying DNA amounts (1 to 500 ng). The two primer pairs COI‐1FR and COI‐2FR yielded melting temperatures (Tm) of 80.55 to 81.00 °C and 82.00 to 82.50 °C for sika deer, and 81.00 to 82.00 °C and 81.40 to 82.00 °C for red deer. Twenty‐two batches of commercially available samples were analyzed by HRM analysis and conventional amplification sequencing, and the species‐labeling error rate was found to be 31.8%. Four batches of samples were identified as mixed (adulterated) in the HRM analysis. CONCLUSION: The combination of a DNA mini‐barcode with HRM analysis facilitated the accurate identification of species of deer‐derived materials, especially the identification of samples in an adulterated mixed state. © 2024 Society of Chemical Industry. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Neural learning control for sampled‐data nonlinear systems based on Euler approximation and first‐order filter.
- Author
-
Liang, Dengxiang and Wang, Min
- Subjects
- *
RADIAL basis functions , *APPROXIMATION error , *NONLINEAR systems , *ADAPTIVE control systems , *COMPUTATIONAL complexity , *SAMPLING errors - Abstract
The primary focus of this research paper is to explore the realm of dynamic learning in sampled‐data strict‐feedback nonlinear systems (SFNSs) by leveraging the capabilities of radial basis function (RBF) neural networks (NNs) under the framework of adaptive control. First, the exact discrete‐time model of the continuous‐time system is expressed as an Euler strict‐feedback model with a sampling approximation error. We provide the consistency condition that establishes the relationship between the exact model and the Euler model with meticulous detail. Meanwhile, a novel lemma is derived to show the stability condition of a digital first‐order filter. To address the non‐causality issues of SFNSs with sampling approximation error and the input data dimension explosion of NNs, the auxiliary digital first‐order filter and backstepping technology are combined to propose an adaptive neural dynamic surface control (ANDSC) scheme. Such a scheme avoids the n‐step time delays associated with the existing NN updating laws derived by the common n‐step predictor technology. A rigorous recursion method is employed to provide a comprehensive verification of the stability, guaranteeing its overall performance and dependability. Following that, the NN weight error systems are systematically decomposed into a sequence of linear time‐varying subsystems, allowing for a more detailed analysis and understanding. In order to ensure the recurrent nature of the input variables, a recursive design is employed, thereby satisfying the partial persistent excitation condition specifically designed for the RBF NNs. Meanwhile, it is verified that the NN estimated weights converge to their ideal values. Compared with the common n‐step predictor technology, there is no need to redesign the learning rules because the designed NN weight updating laws involve no time delays. Subsequently, after capturing and storing the convergence weights, a novel neural learning dynamic surface control (NLDSC) scheme is specifically formulated by leveraging the acquired knowledge. The introduced methodology reduces computational complexity and facilitates practical implementation. Finally, empirical evidence obtained from simulation experiments validates the efficacy and viability of the proposed methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Tensile-WAXD Apparatus: An Improved and Accurate System for the In situ Study of Extension-induced Segmental Orientation in Highly Stretched Elastomer.
- Author
-
Shi, Xiang
- Subjects
- *
DIFFRACTION patterns , *TENSILE tests , *X-ray diffraction , *ELASTOMERS , *SAMPLING errors - Abstract
An improved X-ray apparatus that combines tensile testing and X-ray diffraction has been designed and constructed to conduct time-resolved experiments during uniaxial stretching. By utilizing mortise-like clamping jaws and dogbone-shaped specimens, this setup allows for the simultaneous recording of high-quality mechanical responses and 2D diffraction patterns, owing to the minimization of experimental errors from sample slippage or premature fracture. Furthermore, the local extension ratio can be accurately determined based on thickness variation, and the Hermans' orientation function was demonstrated to be a reliable, highly accurate method for calculating the segmental orientation parameter 〈P2〉 in elastomeric samples under a high degree of stretching. In summary, this innovative tensile-WAXD instrument has proven to be a promising and powerful technique for investigating the "stress-deformation-segmental orientation" relationship in elastomers with high extensibilities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
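For reference, the Hermans' orientation function mentioned in the entry above is the standard second-order Legendre average of the angle between a chain segment and the stretching direction; in WAXD practice the required ⟨cos²φ⟩ is obtained from the azimuthal intensity distribution I(φ) of a diffraction ring. The form below is the commonly used textbook expression, not the authors' specific notation.

```latex
\langle P_2 \rangle \;=\; \frac{3\,\langle \cos^2\varphi \rangle - 1}{2},
\qquad
\langle \cos^2\varphi \rangle \;=\;
\frac{\displaystyle\int_{0}^{\pi/2} I(\varphi)\,\cos^2\varphi\,\sin\varphi\,\mathrm{d}\varphi}
     {\displaystyle\int_{0}^{\pi/2} I(\varphi)\,\sin\varphi\,\mathrm{d}\varphi}
```

Here ⟨P₂⟩ = 1 corresponds to perfect alignment along the stretching direction, 0 to an isotropic sample, and −1/2 to perpendicular alignment.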
43. Importance Sampling for Cost-Optimized Estimation of Burn Probability Maps in Wildfire Monte Carlo Simulations.
- Author
-
Waeselynck, Valentin and Saah, David
- Subjects
- *
MONTE Carlo method , *SAMPLING errors , *WILDFIRES , *QUANTITATIVE research , *PROBABILITY theory , *WILDFIRE prevention - Abstract
Background: Wildfire modelers rely on Monte Carlo simulations of wildland fire to produce burn probability maps. These simulations are computationally expensive. Methods: We study the application of importance sampling to accelerate the estimation of burn probability maps, using L2 distance as the metric of deviation. Results: Assuming a large area of interest, we prove that the optimal proposal distribution reweights the probability of ignitions by the square root of the expected burned area divided by the expected computational cost and then generalize these results to the assets-weighted L2 distance. We also propose a practical approach to searching for a good proposal distribution. Conclusions: These findings contribute quantitative methods for optimizing the precision/computation ratio of wildfire Monte Carlo simulations without biasing the results, offer a principled conceptual framework for justifying and reasoning about other computational shortcuts, and can be readily generalized to a broader spectrum of simulation-based risk modeling. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
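A minimal sketch of the cost-aware importance sampling described in the entry above: ignition locations are drawn from a proposal that reweights the nominal ignition probabilities by the square root of expected burned area over expected simulation cost, and each simulated fire's contribution to the burn-count map is weighted by the likelihood ratio so the estimator remains unbiased. The toy fire-spread routine, the per-cell expectations, and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
n_cells, n_sims = 400, 5000

# Nominal ignition distribution and (assumed known) per-ignition expectations
p_ign = rng.dirichlet(np.ones(n_cells))              # probability of ignition in each cell
exp_area = rng.uniform(1.0, 50.0, n_cells)           # expected burned area per ignition cell
exp_cost = rng.uniform(0.5, 5.0, n_cells)            # expected simulation cost per ignition cell

# Proposal: reweight ignition probabilities by sqrt(expected area / expected cost)
q_ign = p_ign * np.sqrt(exp_area / exp_cost)
q_ign /= q_ign.sum()

def simulate_fire(cell):
    """Toy stand-in for a fire-spread run: burns the ignition cell and a few neighbours."""
    size = rng.poisson(exp_area[cell] / 10) + 1
    return np.arange(cell, min(cell + size, n_cells))

# Importance-sampled burn map: weight each fire by p/q of its ignition cell
burn = np.zeros(n_cells)
cells = rng.choice(n_cells, size=n_sims, p=q_ign)
for c in cells:
    burn[simulate_fire(c)] += p_ign[c] / q_ign[c]
burn_prob = burn / n_sims                            # unbiased per-cell burn probability estimate
print("estimated expected burned cells per ignition:", burn_prob.sum().round(3))
```

The likelihood-ratio weight p/q is what keeps the burn probability map unbiased even though expensive, large-fire ignitions are sampled more often than their nominal probability.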
44. Reducing the statistical error of generative adversarial networks using space‐filling sampling.
- Author
-
Wang, Sumin, Gao, Yuyou, Zhou, Yongdao, Pan, Bin, Xu, Xia, and Li, Tao
- Subjects
- *
GENERATIVE adversarial networks , *STATISTICAL errors , *SAMPLING errors - Abstract
This paper introduces a novel approach to reducing statistical errors in generative models, with a specific focus on generative adversarial networks (GANs). Inspired by the error analysis of GANs, we find that statistical errors mainly arise from random sampling, leading to significant uncertainties in GANs. To address this issue, we propose a selective sampling mechanism called space‐filling sampling. Our method aims to increase the sampling probability in areas with insufficient data, thereby improving the learning performance of the generator. Theoretical analysis confirms the effectiveness of our approach in reducing statistical errors and accelerating convergence in GANs. This research represents a pioneering effort in targeting the reduction of statistical errors in GANs, and it demonstrates the potential for enhancing the training of other generative models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Ranked Set Sampling Based Regression Estimators in Two-Stage Sampling Design.
- Author
-
Yanglem, Worthing and Khongji, Phrangstone
- Subjects
CLUSTER sampling ,REGRESSION analysis ,SAMPLING methods ,MEASUREMENT ,SAMPLING errors - Abstract
This paper examines the use of ranked set sampling in the second stage of the two-stage sampling design to improve the regression estimator. The mean square errors of the suggested estimators are computed, and both theoretical and simulation results demonstrate improved estimates as compared to the conventional sampling methods, particularly in situations with limited resources or costly measurements. This approach provides a promising method for enhancing regression analysis in complex sampling scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
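A minimal sketch of ranked set sampling, the building block used in the second stage of the design in the entry above: units are grouped into sets, ranked cheaply (here by an auxiliary variable), and only one judgement-ranked unit per set is actually measured. The bivariate population, the set size, and the comparison against simple random sampling are illustrative assumptions; the paper's two-stage cluster structure and regression estimator are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

def ranked_set_sample(y, x, set_size):
    """One RSS cycle: draw set_size**2 units, rank within each set by the auxiliary x,
    and measure only the i-th ranked unit of the i-th set. Returns the measured y values."""
    idx = rng.choice(len(y), size=set_size * set_size, replace=False).reshape(set_size, set_size)
    measured = []
    for i in range(set_size):
        order = np.argsort(x[idx[i]])         # cheap ranking by the auxiliary variable
        measured.append(y[idx[i][order[i]]])  # costly measurement on one unit per set
    return np.array(measured)

# Population where x is an informative but imperfect ranking variable for y
N = 10_000
x = rng.standard_normal(N)
y = 5.0 + 2.0 * x + rng.standard_normal(N)

m, cycles = 4, 500
rss_means = [ranked_set_sample(y, x, m).mean() for _ in range(cycles)]
srs_means = [rng.choice(y, size=m, replace=False).mean() for _ in range(cycles)]
print("RSS mean-estimator std:", np.std(rss_means).round(3))
print("SRS mean-estimator std:", np.std(srs_means).round(3))   # typically larger than RSS
```

With the same number of costly measurements per cycle, the ranked set sample spreads observations across the distribution, which is the source of the efficiency gain the paper exploits inside its regression estimator.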
46. How robust and how large is the description-experience gap? -- evidence from an eye-tracking research.
- Author
-
Nie, Dandan, Hu, Zhujing, Yang, Jianyong, and Zhu, Debiao
- Subjects
SAMPLING errors ,EXTREME value theory ,DECISION making ,EYE tracking ,EYE movements - Abstract
Recent experimental evidence suggests a description-experience (D-E) gap in decision making, where rare events are overweighted in decisions from description but underweighted in decisions from experience. However, the robustness of the D-E gap is controversial, and the roles of sampling error and extreme outcomes in the D-E gap are unclear. To investigate the robustness of the D-E gap, we conducted two eye-tracking experiments to examine the magnitude of the D-E gap when sampling error is excluded and how the extremity of outcomes affects the gap. Subjects' choices indicated that, after controlling for sampling error, the D-E gap decreased but remained persistent. The fixation rate in description-based tasks was higher than the corresponding objective probabilities, but approximately equal to them in experience-based tasks. When the impact of extreme outcomes was examined, the results suggested that the D-E gap decreases dramatically or even disappears in the presence of extreme values. Eye-movement data showed a higher fixation rate than the objective probability for extreme outcomes in experience-based tasks, indicating that when a rare event is extreme, it is overweighted; this effect was not found for non-extreme values. Our results indicate that the D-E gap is a robust effect, but one that is subject to a boundary condition, namely the extremity of the decision outcome. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Micro-computed Tomography in the Evaluation of Eosin-stained Axillary Lymph Node Biopsies of Females Diagnosed with Breast Cancer.
- Author
-
Laguna-Castro, Santiago, Salminen, Annukka, Arponen, Otso, Hannula, Markus, Rinta-Kiikka, Irina, Hyttinen, Jari, and Tolonen, Teemu
- Subjects
- *
CORE needle biopsy , *BREAST biopsy , *IMAGE analysis , *X-ray computed microtomography , *SAMPLING errors , *MAGNETIC resonance mammography - Abstract
Histopathological investigation of metastasis in core needle axillary lymph node (ALN) biopsies is crucial for the prognosis and treatment planning of breast cancer patients. Biopsies are typically sliced and evaluated as two-dimensional (2D) images. Biopsy sampling errors and the limited view provided by 2D histology are leading factors contributing to false-negative results in the preoperative detection of metastatic lymph nodes and to underestimation of metastatic foci. In this proof-of-concept study, we explore the technical feasibility and potential of three-dimensional (3D) X-ray micro-computed tomography imaging to expedite error detection, enhance histopathological accuracy, and enable precise measurement of metastatic lesions in ALN core needle biopsies of two breast cancer patients. Our self-developed micro-CT protocol uses eosin, a common histological dye, for the first time to enhance the 3D architecture of ALNs. The analysis performed on the images of the ALN biopsies involves cancer tissue segmentation, swift biopsy evaluation, and measurement of the longest metastatic diameter and deposit volume. The eosin micro-CT protocol shows potential for improved tumor deposit estimates, offering additional clinical value compared to standard 2D histology; however, further studies validating this method are needed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. Development of a robust SNP marker set for genotyping diverse gene bank collections of polyploid roses.
- Author
-
Patzer, Laurine, Thomsen, Tim, Wamhoff, David, Schulz, Dietmar Frank, Linde, Marcus, and Debener, Thomas
- Subjects
- *
PLANT germplasm , *GENETIC variation , *BANK management , *GENOTYPES , *SAMPLING errors , *GERMPLASM - Abstract
Background: Due to genetic depletion in nature, gene banks play a critical role in the long-term conservation of plant genetic resources and the provision of a wide range of plant genetic diversity for research and breeding programs. Genetic information on accessions facilitates gene bank management and can help to conserve limited resources and to identify taxonomic misclassifications or mislabelling. Here, we developed SNP markers for genotyping 4,187 mostly polyploid rose accessions from large rose collections, including the German Genebank for Roses. Results: We filtered SNP marker information from the RhWag68k Axiom SNP array using call rates, uniformity of the four allelic dosage groups and chromosomal position to improve genotyping efficiency. After conversion to individual PACE® markers and further filtering, we selected markers with high discriminatory power. These markers were used to analyse 4,187 accessions with a mean call rate of 91.4%. By combining two evaluation methods, the mean call rate was increased to 95.2%. Additionally, the robustness against the genotypic groups used for calling was evaluated, resulting in a final set of 18 markers. Analyses of 94 pairs of assumed duplicate accessions included as controls revealed unexpected differences for eight pairs, which were confirmed using SSR markers. After removing the duplicates and filtering for accessions that were robustly called with all 18 markers, 141 out of the 1,957 accessions showed unexpected identical marker profiles with at least one other accession in our PACE® and SSR analysis. Given the attractiveness of NGS technologies, 13 SNPs from the marker set were also analysed using amplicon sequencing, with 76% agreement observed between PACE® and amplicon markers. Conclusions: Although sampling error cannot be completely excluded, this is an indication that mislabelling occurs in rose collections and that molecular markers may be able to detect these cases. In future applications, our marker set could be used to develop a core reference set of representative accessions, and thus optimise the selection of gene bank accessions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. The generalised resolvent-based turbulence estimation with arbitrarily sampled measurements in time.
- Subjects
COHERENT structures ,HEAT transfer in turbulent flow ,TURBULENT boundary layer ,NUMERICAL solutions to partial differential equations ,REYNOLDS stress ,SAMPLING errors ,MEASUREMENT errors ,EDDY viscosity ,HYPERSONIC aerodynamics - Abstract
The article in the Journal of Fluid Mechanics introduces the Generalised Resolvent-Based Turbulence Estimation (GRBE) method for accurately estimating turbulent flow states from measurements sampled arbitrarily in time. The study validates the GRBE by estimating the complex Ginzburg–Landau equation and turbulent channel flows, showing improved accuracy compared with the Resolvent-Based Estimation (RBE) in temporally unresolved cases. It also discusses the application of different forcing models to inform the GRBE estimator, highlighting the importance of proper modelling strategies for the forcing in the temporal domain, and situates the work within the broader literature on turbulent flows and their dynamics. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
50. A modification of the periodic nonuniform sampling involving derivatives with a Gaussian multiplier.
- Author
-
Asharabi, Rashad M. and Khirallah, Mustafa Q.
- Subjects
- *
IRREGULAR sampling (Signal processing) , *APPROXIMATION theory , *INTEGRAL functions , *EXPONENTIAL functions , *SAMPLING errors , *ANALYTIC functions - Abstract
The periodic nonuniform sampling series, involving periodic samples of both the function and its first r derivatives, was initially introduced by Nathan (Inform Control 22: 172–182, 1973). Since then, various authors have extended this sampling series in different contexts over the past decades. However, the application of the periodic nonuniform derivative sampling series in approximation theory has been limited due to its slow convergence. In this article, we introduce a modification to the periodic nonuniform sampling involving derivatives by incorporating a Gaussian multiplier. This modification results in a significantly improved convergence rate, which now follows an exponential order. This is a significant improvement compared to the original series, which had a convergence rate of O(N^{-1/p}), where p > 1. The introduced modification relies on a complex-analytic technique and is applicable to a wide range of functions. Specifically, it is suitable for the class of entire functions of exponential type that satisfy a decay condition, as well as for the class of analytic functions defined on a horizontal strip. To validate the presented theoretical analysis, the paper includes rigorous numerical experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
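As a point of reference for the Gaussian-multiplier idea in the entry above, the simplest (uniform, derivative-free) analogue is the Gaussian-regularized truncated sinc series sketched below; the periodic nonuniform derivative version of the paper replaces the sinc kernel with the corresponding Hermite-type interpolation kernels. The truncation window and the width parameter shown are illustrative, not the paper's optimized choices.

```latex
f(t) \;\approx\;
\sum_{n:\,|\,n - t/h\,| \le N}
f(nh)\,
\operatorname{sinc}\!\left(\frac{t - nh}{h}\right)
\exp\!\left(-\frac{(t - nh)^{2}}{2\sigma_{N}^{2}}\right),
\qquad
\operatorname{sinc}(x) := \frac{\sin(\pi x)}{\pi x}
```

Multiplying each term by the Gaussian localizes the series around t, so only about 2N + 1 samples contribute; with a suitably chosen, N-dependent width σ_N the truncation error decays exponentially in N, in contrast to the slow algebraic decay of the unmodified series.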