11 results for "Markus, Corey"
Search Results
2. Functional Reference Limits: Describing Physiological Relationships and Determination of Physiological Limits for Enhanced Interpretation of Laboratory Results.
- Author
- Chuah TY, Lim CY, Tan RZ, Pratumvinit B, Loh TP, Vasikaran S, and Markus C
- Subjects
- Humans, Reference Values, Biomarkers, Clinical Laboratory Techniques, Laboratories
- Abstract
Functional reference limits describe key changes in the physiological relationship between a pair of physiologically related components. Statistically, this can be represented by a significant change in the curvature of a mathematical function or curve (e.g., an observed plateau). The point at which the statistical relationship changes significantly is the point of curvature inflection and can be mathematically modeled from the relationship between the interrelated biomarkers. Conceptually, functional reference limits reside between reference intervals, which describe the statistical boundaries of a single biomarker within the reference population, and clinical decision limits, which are often linked to the risk of morbidity or mortality and set as thresholds. Functional reference limits provide important physiological and pathophysiological insights that can aid laboratory result interpretation. Laboratory professionals are in a unique position to harness data from laboratory information systems to derive clinically relevant values. Increasing research on and reporting of functional reference limits in the literature will enhance their contribution to laboratory medicine and widen the evidence base underpinning clinical decision limits, which is currently informed almost exclusively by clinical trials. Their inclusion in laboratory reports will enhance the intellectual contribution of laboratory professionals to clinical care beyond the statistical boundaries of a healthy reference population and pave the way for them to be considered in shaping clinical decision limits. This review provides an overview of the concepts related to functional reference limits, clinical examples of their use, and the impetus to include them in laboratory reports.
- Published
- 2023
- Full Text
- View/download PDF
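To make the point of curvature inflection concrete, the minimal Python sketch below fits a cubic polynomial to simulated data for two hypothetically related biomarkers and solves for the point where the fitted curvature changes sign. The sigmoid relationship, the noise level and the cubic fit are illustrative assumptions only, not the modelling used in the review.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired results for two physiologically related biomarkers:
# y rises with x and then plateaus, with a true inflection around x = 5.
x = rng.uniform(0, 10, 500)
y = 10 / (1 + np.exp(-(x - 5))) + rng.normal(0, 0.3, 500)

# One simplistic way to locate the change in curvature: fit a cubic polynomial
# a*x^3 + b*x^2 + c*x + d and solve d2y/dx2 = 6*a*x + 2*b = 0.
a, b, c, d = np.polyfit(x, y, 3)
inflection = -b / (3 * a)
print(f"Estimated point of curvature inflection: x = {inflection:.2f}")
```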
3. Between and within calibration variation: implications for internal quality control rules.
- Author
- Lim CY, Lee JJS, Choy KW, Badrick T, Markus C, and Loh TP
- Subjects
- Male, Humans, Calibration, Quality Control, Bias, Laboratories, Prostate-Specific Antigen
- Abstract
The variability between calibrations can be larger than the within-calibration variation for some measurement procedures, that is, a large CVbetween:CVwithin ratio. In this study, we examined the false rejection rate and probability of bias detection of quality control (QC) rules at varying calibration CVbetween:CVwithin ratios. Historical QC data for six representative routine clinical chemistry serum measurement procedures (calcium, creatinine, aspartate aminotransferase, thyrotrophin, prostate specific antigen and gentamicin) were extracted to derive the CVbetween:CVwithin ratios using analysis of variance. Additionally, the false rejection rate and probability of bias detection of three 'Westgard' QC rules (2:2S, 4:1S, 10X) at varying CVbetween:CVwithin ratios (0.1-10), magnitudes of bias, and QC events per calibration (5-80) were examined through simulation modelling. The CVbetween:CVwithin ratios for the six routine measurement procedures ranged from 1.1 to 34.5. With ratios >3, false rejection rates were generally above 10%. Similarly, for QC rules involving a greater number of consecutive results, false rejection rates increased with increasing ratios, while all rules achieved maximum bias detection. Laboratories should avoid the 2:2S, 4:1S and 10X QC rules when calibration CVbetween:CVwithin ratios are elevated, particularly for measurement procedures with a higher number of QC events per calibration. (Copyright © 2023 Royal College of Pathologists of Australasia. Published by Elsevier B.V. All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF
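As background to the CVbetween:CVwithin ratio examined above, the sketch below derives the two components from QC results grouped by calibration using one-way ANOVA mean squares. The QC values and the balanced design are hypothetical; the study's historical data and simulation code are not reproduced here.

```python
import numpy as np

# Hypothetical QC results grouped by calibration cycle (balanced design)
groups = [
    [2.31, 2.35, 2.29, 2.33],   # calibration 1
    [2.42, 2.40, 2.44, 2.41],   # calibration 2
    [2.36, 2.34, 2.38, 2.35],   # calibration 3
]
n = len(groups[0])                                   # QC events per calibration
grand_mean = np.mean([x for g in groups for x in g])

# One-way ANOVA mean squares
ms_within = np.mean([np.var(g, ddof=1) for g in groups])
ms_between = n * np.var([np.mean(g) for g in groups], ddof=1)

# Variance components: sigma2_within = MSw, sigma2_between = (MSb - MSw) / n
cv_within = np.sqrt(ms_within) / grand_mean * 100
cv_between = np.sqrt(max((ms_between - ms_within) / n, 0.0)) / grand_mean * 100
print(f"CVwithin = {cv_within:.2f}%, CVbetween = {cv_between:.2f}%, "
      f"ratio = {cv_between / cv_within:.1f}")
```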
4. Lot-to-lot reagent verification: Effect of sample size and replicate measurement on linear regression approaches.
- Author
- Koh NWX, Markus C, Loh TP, and Lim CY
- Subjects
- Humans, Indicators and Reagents, Least-Squares Analysis, Linear Models, Sample Size, Laboratories
- Abstract
Background: We investigate the simulated impact of varying sample size and replicate number using ordinary least squares (OLS) and Deming regression (DR), in both weighted and unweighted forms, when applied to paired measurements in lot-to-lot verification. Methods: Simulation parameters investigated in this study were: range ratio, analytical coefficient of variation, sample size, replicates, alpha (level of significance), and constant and proportional biases. For each simulation scenario, 10,000 iterations were performed, and the average probability of bias detection was determined. Results: Generally, the weighted forms of regression significantly outperformed the unweighted forms for bias detection. At the low range ratio (1:10), for both weighted OLS and DR, improved bias detection was observed with a greater number of replicates than with an increased number of comparison samples. At the high range ratio (1:1000), for both weighted OLS and DR, increasing the number of replicates above two was only slightly advantageous in the scenarios examined. Increasing the number of comparison samples resulted in better detection of smaller biases between reagent lots. Conclusions: The results of this study allow laboratories to determine a tailored approach to lot-to-lot verification studies, balancing the number of replicates and comparison samples against the analytical performance of the measurement procedures involved. (Copyright © 2022 Elsevier B.V. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
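The sketch below illustrates the kind of simulation described above for one simplified case: unweighted OLS on replicate means, testing whether the fitted slope differs from 1 in the presence of a 5% proportional bias. The concentration range, CV and significance test are assumptions; the published study also examined weighted OLS and Deming regression.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def detection_rate(n_samples, n_replicates, prop_bias=1.05, cv=0.03,
                   alpha=0.05, n_iter=2000):
    """Probability that OLS on replicate means flags a proportional between-lot bias.
    All parameter values are illustrative assumptions."""
    hits = 0
    for _ in range(n_iter):
        true = rng.uniform(1, 10, n_samples)   # comparison sample concentrations
        old = (true * (1 + rng.normal(0, cv, (n_replicates, n_samples)))).mean(axis=0)
        new = (true * prop_bias
               * (1 + rng.normal(0, cv, (n_replicates, n_samples)))).mean(axis=0)
        fit = stats.linregress(old, new)
        t = (fit.slope - 1) / fit.stderr       # test H0: slope = 1
        p = 2 * stats.t.sf(abs(t), df=n_samples - 2)
        hits += p < alpha
    return hits / n_iter

# More replicates versus more comparison samples, low (1:10) range ratio
print(detection_rate(n_samples=20, n_replicates=2))
print(detection_rate(n_samples=20, n_replicates=4))
print(detection_rate(n_samples=40, n_replicates=2))
```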
5. An Objective Approach to Deriving the Clinical Performance of Autoverification Limits.
- Author
- Loh TP, Tan RZ, Lim CY, and Markus C
- Subjects
- Humans, Laboratories
- Abstract
This study describes an objective approach to deriving the clinical performance of autoverification rules to inform laboratory practice when implementing them. Anonymized historical laboratory data for 12 biochemistry measurands were collected and Box-Cox-transformed to approximate a Gaussian distribution. The historical laboratory data were assumed to be error-free. Using probability theory, the clinical specificity of a set of autoverification limits can be derived by calculating the percentile values of the overall distribution of a measurand. The 5th and 95th percentile values of the laboratory data were calculated to achieve a 90% clinical specificity. Next, a predefined tolerable total error adopted from the Royal College of Pathologists of Australasia Quality Assurance Program was applied to the extracted data before Box-Cox transformation. Using a standard normal distribution, the clinical sensitivity can be derived from the probability of the Z-value falling to the right of the autoverification limit (one-tailed), multiplied by two for a two-tailed assessment. The clinical sensitivity showed an inverse relationship with between-subject biological variation. A laboratory can thus set and assess the clinical performance of autoverification rules so that they conform to its desired risk profile.
- Published
- 2022
- Full Text
- View/download PDF
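The sketch below walks through the same sequence on simulated data: percentile-based autoverification limits for roughly 90% clinical specificity, Box-Cox transformation of assumed error-free results, and a one-tailed sensitivity estimate for results carrying an assumed 15% tolerable total error. The distribution and the TEa value are illustrative, not taken from the study.

```python
import numpy as np
from scipy import stats, special

rng = np.random.default_rng(1)

# Hypothetical error-free historical results for one measurand (right-skewed)
historical = rng.lognormal(mean=3.5, sigma=0.3, size=20000)

# Autoverification limits at the 5th and 95th percentiles -> ~90% clinical specificity
lower, upper = np.percentile(historical, [5, 95])

# Estimate the Box-Cox lambda that makes the error-free data approximately Gaussian
_, lam = stats.boxcox(historical)

# Apply an assumed tolerable total error (TEa, 15%), transform with the same lambda,
# and take the one-tailed probability beyond the upper limit as the clinical
# sensitivity for positively biased results.
tea = 0.15
shifted = special.boxcox(historical * (1 + tea), lam)
z = (special.boxcox(upper, lam) - shifted.mean()) / shifted.std(ddof=1)
sensitivity = stats.norm.sf(z)
print(f"Limits: {lower:.1f}-{upper:.1f}; sensitivity for +{tea:.0%} error: {sensitivity:.2f}")
```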
6. Comparison of two (data mining) indirect approaches for between-subject biological variation determination.
- Author
- Tan RZ, Markus C, Vasikaran S, and Loh TP
- Subjects
- Computer Simulation, Data Mining, Humans, Reference Values, Biological Variation, Population, Laboratories
- Abstract
Background: Between-subject biological variation (CVg) is an important parameter in several aspects of laboratory practice, including the setting of analytical performance specifications, delta checks and calculation of the index of individuality. Using simulations, we compare the performance of two indirect (data mining) approaches for deriving CVg. Methods: The expected mean squares (EMS) method was compared against that proposed by Harris and Fraser. Using numerical simulations, d (the percentage difference in the mean between the non-pathological and pathological populations), CVi (the within-subject coefficient of variation of the non-pathological distribution), f (the fraction of pathological values) and e (the relative increase in CVi of the pathological distribution) were varied for a total of 320 conditions to examine the impact on the relative fractional error of the recovered CVg compared with the true value. Results: The EMS and Harris and Fraser's approaches yielded similar performance, with 158 and 157 conditions within ±0.20 fractional error of the true underlying CVg for the normal and lognormal distributions, respectively. Both the EMS and Harris and Fraser's methods performed better using the calculated CVi rather than the actual ('presumptive') CVi. The number of conditions within 0.20 fractional error of the true underlying CVg did not differ significantly between the normal and lognormal distributions. The estimation of CVg improved with decreasing values of f, d and the CVi:CVg ratio. Discussions: The two statistical approaches included in this study showed reliable performance under the simulation conditions examined. (Copyright © 2022 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
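As a simplified illustration of the variance-decomposition idea behind indirect CVg estimation (not the exact EMS or Harris and Fraser implementations), the sketch below recovers CVg from single results of simulated non-pathological subjects when the within-subject and analytical components are assumed known.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical single results from 5,000 non-pathological subjects: each observation
# carries between-subject (CVg), within-subject (CVi) and analytical (CVa) variation
# around a population set point of 100 units.
cvg_true, cvi, cva = 0.15, 0.06, 0.03
setpoints = 100 * (1 + rng.normal(0, cvg_true, 5000))
observed = setpoints * (1 + rng.normal(0, cvi, 5000)) * (1 + rng.normal(0, cva, 5000))

# Nested model: CVt^2 ~= CVg^2 + CVi^2 + CVa^2, so CVg is recovered by subtracting
# the assumed within-subject and analytical terms from the observed total CV.
cvt = observed.std(ddof=1) / observed.mean()
cvg_est = np.sqrt(cvt**2 - cvi**2 - cva**2)
print(f"true CVg = {cvg_true:.3f}, recovered CVg = {cvg_est:.3f}")
```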
7. Comparison of six regression-based lot-to-lot verification approaches.
- Author
- Koh NWX, Markus C, Loh TP, and Lim CY
- Subjects
- Bias, Computer Simulation, Humans, Indicators and Reagents, Laboratories
- Abstract
Objectives: Detection of between-lot reagent bias is clinically important and can be assessed by applying regression-based statistics to several paired measurements obtained from the existing and new candidate lots. Here, the bias detection capabilities of six regression-based lot-to-lot reagent verification assessments, including an extension of the Bland-Altman with regression approach, are compared. Methods: Least squares and Deming regression (in both weighted and unweighted forms), confidence ellipses and Bland-Altman with regression (BA-R) approaches were investigated. The numerical simulation included permutations of the following parameters: differing result range ratios (upper:lower measurement limits), levels of significance (alpha), constant and proportional biases, analytical coefficients of variation (CV), and numbers of replicates and sample sizes. The simulated sample concentrations were drawn from a uniformly distributed concentration range. Results: At a low range ratio (1:10, CV 3%), the BA-R performed best, albeit with a higher false rejection rate, closely followed by the weighted regression approaches. At larger range ratios (1:1,000, CV 3%), the BA-R performed poorly and the weighted regression approaches performed best. At higher assay imprecision (CV 10%), all six approaches performed poorly, with bias detection rates <50%. A lower alpha reduced the false rejection rate, while greater sample numbers and replicates improved bias detection. Conclusions: When performing reagent lot verification, laboratories need to finely balance the false rejection rate (selecting an appropriate alpha) against the power of bias detection (an appropriate statistical approach matched to assay performance characteristics) and operational considerations (number of clinical samples and replicates, and not having an alternate reagent lot). (© 2022 Walter de Gruyter GmbH, Berlin/Boston.)
- Published
- 2022
- Full Text
- View/download PDF
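As one plausible reading of the Bland-Altman with regression (BA-R) idea compared above, the sketch below regresses the paired between-lot differences on the paired means: a slope significantly different from zero points to a proportional bias and a non-zero intercept to a constant bias. The data and the 4% bias are assumptions; this is not the authors' implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Hypothetical paired results from the existing and candidate reagent lots
true = rng.uniform(1, 10, 40)
old_lot = true * (1 + rng.normal(0, 0.03, 40))
new_lot = true * 1.04 * (1 + rng.normal(0, 0.03, 40))   # 4% proportional bias

# Regress differences on means (Bland-Altman with regression, simplified)
means = (old_lot + new_lot) / 2
diffs = new_lot - old_lot
fit = stats.linregress(means, diffs)
print(f"slope = {fit.slope:.3f} (p = {fit.pvalue:.3g}), intercept = {fit.intercept:.3f}")
```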
8. Comparison of 8 methods for univariate statistical exclusion of pathological subpopulations for indirect reference intervals and biological variation studies.
- Author
- Tan RZ, Markus C, Vasikaran S, and Loh TP
- Subjects
- Humans, Reference Values, Laboratories, Research Design
- Abstract
Background: Indirect reference interval and biological variation studies rely heavily on statistical methods to separate pathological and non-pathological subpopulations within the same dataset. In recognition of this, we compare the performance of eight univariate statistical methods for identification and exclusion of values originating from pathological subpopulations. Methods: The eight approaches examined were: Tukey's rule with and without Box-Cox transformation; median absolute deviation; double median absolute deviation; Gaussian mixture models; van der Loo (Vdl) methods 1 and 2; and the Kosmic approach. Four scenarios, including lognormal distributions, were examined, varying the number of pathological populations and their central location, spread and proportion, for a total of 256 simulated mixed populations. A performance criterion of ±0.05 fractional error from the true underlying lower and upper reference limits was chosen. Results: Overall, the Kosmic method was a standout, with the highest number of scenarios lying within the acceptable error, followed by Vdl method 1 and Tukey's rule. Kosmic and Vdl method 1 appear to discriminate the non-pathological reference population better in the case of log-normally distributed data. When the proportion and spread of pathological subpopulations were high, the performance of statistical exclusion deteriorated considerably. Discussions: It is important that laboratories use a priori defined clinical criteria to minimise the proportion of pathological subpopulations in a dataset prior to analysis. The curated dataset should then be carefully examined so that the appropriate statistical method can be applied. (Copyright © 2022 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
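The sketch below shows one of the eight compared approaches, Tukey's rule applied after Box-Cox transformation, on a simulated dataset contaminated with a pathological subpopulation; the mixture parameters and the percentile-based indirect reference interval taken afterwards are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical mixed dataset: 90% non-pathological, 10% pathological (shifted upward)
healthy = rng.lognormal(mean=4.0, sigma=0.2, size=9000)
pathological = rng.lognormal(mean=4.6, sigma=0.3, size=1000)
mixed = np.concatenate([healthy, pathological])

# Tukey's rule after Box-Cox transformation
transformed, lam = stats.boxcox(mixed)
q1, q3 = np.percentile(transformed, [25, 75])
iqr = q3 - q1
keep = (transformed >= q1 - 1.5 * iqr) & (transformed <= q3 + 1.5 * iqr)

# Indirect reference interval from the retained values (2.5th-97.5th percentiles)
lower, upper = np.percentile(mixed[keep], [2.5, 97.5])
print(f"retained {keep.mean():.0%} of results, indirect RI: {lower:.0f}-{upper:.0f}")
```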
9. Internal quality control: Moving average algorithms outperform Westgard rules.
- Author
- Poh DKH, Lim CY, Tan RZ, Markus C, and Loh TP
- Subjects
- Humans, Algorithms, Laboratories, Models, Theoretical, Programming Languages, Quality Control
- Abstract
Introduction: Internal quality control (IQC) is traditionally interpreted against predefined control limits using multi-rules or 'Westgard rules'. These include the commonly used 1:3s and 2:2s rules. Either individually or in combination, these rules have limited sensitivity for the detection of systematic errors. In this proof-of-concept study, we directly compare the performance of three moving average algorithms with Westgard rules for the detection of systematic error. Methods: In this simulation study, 'error-free' IQC data (control case) were generated. Westgard rules (1:3s and 2:2s) and three moving average algorithms (simple moving average (SMA), weighted moving average (WMA) and exponentially weighted moving average (EWMA), all using ±3SD as control limits) were applied to examine the false positive rates. Following this, systematic errors were introduced into the baseline IQC data to evaluate the probability of error detection and the average number of episodes for error detection (ANEed). Results: From the power function graphs, all three moving average algorithms showed a better probability of error detection than Westgard rules. They also had lower ANEed than Westgard rules. False positive rates were comparable between the moving average algorithms and Westgard rules (all <0.5%). The performance of the SMA algorithm was comparable to that of the weighted forms (i.e. WMA and EWMA). Conclusion: Application of an SMA algorithm to IQC data improves systematic error detection compared with Westgard rules and can simplify laboratories' IQC strategies. (Copyright © 2021 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.)
- Published
- 2021
- Full Text
- View/download PDF
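The sketch below contrasts the 1:3s rule with an EWMA on simulated IQC data containing a 1 SD systematic shift. The EWMA weight and the use of EWMA-adjusted ±3SD limits are assumptions for illustration and may differ from the study's implementation.

```python
import numpy as np

rng = np.random.default_rng(9)
target, sd = 5.0, 0.1                     # assumed IQC target and standard deviation

# Simulated IQC results with a 1 SD systematic error introduced from result 100 onward
qc = rng.normal(target, sd, 200)
qc[100:] += sd

# 1:3s Westgard rule: flag any single result beyond +/- 3 SD of the target
westgard_flags = np.flatnonzero(np.abs(qc - target) > 3 * sd)

# EWMA with weight lam, initialised at the target; its steady-state SD is
# sd * sqrt(lam / (2 - lam)), and +/- 3 of that is used as the control limit.
lam = 0.1
ewma = np.empty_like(qc)
prev = target
for i, x in enumerate(qc):
    prev = lam * x + (1 - lam) * prev
    ewma[i] = prev
ewma_flags = np.flatnonzero(np.abs(ewma - target) > 3 * sd * np.sqrt(lam / (2 - lam)))

print("first 1:3s flag:", westgard_flags[0] if westgard_flags.size else "none")
print("first EWMA flag:", ewma_flags[0] if ewma_flags.size else "none")
```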
10. Evidence-based approach to setting delta check rules.
- Author
- Markus C, Tan RZ, and Loh TP
- Subjects
- Humans, Quality Control, Reference Values, Laboratories
- Abstract
Delta checks are a post-analytical verification tool that compares the difference between sequential laboratory results belonging to the same patient against a predefined limit. This unique quality tool highlights a potential error at the individual patient level. A difference in sequential laboratory results that exceeds the predefined limit is considered likely to contain an error that requires further investigation, which can be time and resource intensive. This may delay the provision of the result to the healthcare provider or entail recollection of the patient sample. Delta checks have been used primarily to detect sample misidentification (sample mix-up, wrong blood in tube), and recent advancements in laboratory medicine, including the adoption of protocolized procedures, information technology and automation in the total testing process, have significantly reduced the prevalence of such errors. As such, delta check rules need to be selected carefully to balance the clinical risk of these errors against the need to maintain operational efficiency. Historically, delta check rules have been set by professional opinion based on reference change values (biological variation) or the published literature. Delta check rules implemented in this manner may not inform laboratory practitioners of their real-world performance. This review discusses several evidence-based approaches to the optimal setting of delta check rules that directly inform the laboratory practitioner of the error detection capabilities of the selected rules. Subsequent verification of the workflow for the selected delta check rules is also discussed. This review is intended to provide practical assistance to laboratories in setting evidence-based delta check rules that best suit their local operational and clinical needs.
- Published
- 2021
- Full Text
- View/download PDF
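A minimal sketch of the delta check mechanism described above, assuming a hypothetical 30% relative limit; real rules would be measurand-specific and set using the evidence-based approaches the review discusses.

```python
# Hypothetical delta check: flag a result when the relative change from the same
# patient's previous result exceeds an assumed 30% limit.
DELTA_LIMIT = 0.30

def delta_check(previous: float, current: float, limit: float = DELTA_LIMIT) -> bool:
    """Return True when the relative difference exceeds the delta check limit."""
    return abs(current - previous) / abs(previous) > limit

previous_results = {"patient_A": 140.0, "patient_B": 4.1}    # last verified results
incoming = [("patient_A", 90.0), ("patient_B", 4.3)]         # results awaiting release

for patient, result in incoming:
    flagged = delta_check(previous_results[patient], result)
    action = "hold for review" if flagged else "release"
    print(f"{patient}: {previous_results[patient]} -> {result}: {action}")
```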
11. Linearity assessment: deviation from linearity and residual of linear regression approaches.
- Author
- Lim, Chun Yee, Lee, Xavier, Tran, Mai Thi Chi, Markus, Corey, Loh, Tze Ping, Ho, Chung Shun, Theodorsson, Elvar, Greaves, Ronda F., Cooke, Brian R., and Zakaria, Rosita
- Subjects
- EVALUATION methodology, COMPUTER simulation, EXPERIMENTAL design, MEASUREMENT, LABORATORIES
- Abstract
In this computer simulation study, we examine four different statistical approaches to linearity assessment: two variants of deviation from linearity (individual (IDL) and averaged (ADL)) and two variants of the residuals of linear regression (individual and averaged). From the results of the simulation, the following broad suggestions are provided to laboratory practitioners performing linearity assessment. High imprecision can challenge linearity investigations by producing a high false positive rate or a low power of detection; the imprecision of the measurement procedure should therefore be considered when interpreting linearity assessment results, and in the presence of high imprecision the results should be interpreted with caution. The different approaches examined in this study performed well under different analytical scenarios, so a considered and tailored study design should be implemented for optimal outcomes. With the exception of specific scenarios, both the ADL and IDL methods were suboptimal for the assessment of linearity. When imprecision is low (3 %), the averaged residual of linear regression with triplicate measurements and a non-linearity acceptance limit of 5 % produces <5 % false positive rates and a high power of detection of non-linearity (>70 %) across different types and degrees of non-linearity. Departures from linearity are difficult to identify in practice, and enhanced methods of detection need development.
- Published
- 2024
- Full Text
- View/download PDF
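The sketch below illustrates the averaged residual of linear regression approach under the conditions the abstract highlights (triplicate measurements, 3 % imprecision, 5 % acceptance limit); the dilution scheme and the simulated curvature are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linearity experiment: 5 levels measured in triplicate with 3% imprecision
# and some curvature at the top of the measuring range.
nominal = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
true = nominal * (1 - 0.08 * (nominal / nominal.max()) ** 2)
measured = true * (1 + rng.normal(0, 0.03, (3, nominal.size)))

# Averaged residual of linear regression: fit a straight line to the replicate means,
# express each level's residual as a percentage of the fitted value and compare it
# against an assumed 5% non-linearity acceptance limit.
means = measured.mean(axis=0)
slope, intercept = np.polyfit(nominal, means, 1)
fitted = slope * nominal + intercept
residual_pct = 100 * (means - fitted) / fitted
print("residuals (%):", np.round(residual_pct, 2))
print("non-linearity flagged:", bool((np.abs(residual_pct) > 5).any()))
```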