23 results for "Loh, Tze Ping"
Search Results
2. Functional Reference Limits: Describing Physiological Relationships and Determination of Physiological Limits for Enhanced Interpretation of Laboratory Results.
- Author
- Chuah TY, Lim CY, Tan RZ, Pratumvinit B, Loh TP, Vasikaran S, and Markus C
- Subjects
- Humans, Reference Values, Biomarkers, Clinical Laboratory Techniques, Laboratories
- Abstract
Functional reference limits describe key changes in the physiological relationship between a pair of physiologically related components. Statistically, this can be represented by a significant change in the curvature of a mathematical function or curve (e.g., an observed plateau). The point at which the statistical relationship changes significantly is the point of curvature inflection and can be mathematically modeled from the relationship between the interrelated biomarkers. Conceptually, functional reference limits reside between reference intervals, which describe the statistical boundaries of a single biomarker within the reference population, and clinical decision limits, which are often linked to the risk of morbidity or mortality and set as thresholds. Functional reference limits provide important physiological and pathophysiological insights that can aid laboratory result interpretation. Laboratory professionals are in a unique position to harness data from laboratory information systems to derive clinically relevant values. Increasing research on and reporting of functional reference limits in the literature will enhance their contribution to laboratory medicine and widen the evidence base used for clinical decision limits, which is currently contributed to almost exclusively by clinical trials. Their inclusion in laboratory reports will enhance the intellectual value of laboratory professionals in clinical care beyond the statistical boundaries of a healthy reference population and pave the way for them to be considered in shaping clinical decision limits. This review provides an overview of the concepts related to functional reference limits, clinical examples of their use, and the impetus to include them in laboratory reports.
- Published
- 2023
- Full Text
- View/download PDF
3. Between and within calibration variation: implications for internal quality control rules.
- Author
- Lim CY, Lee JJS, Choy KW, Badrick T, Markus C, and Loh TP
- Subjects
- Male, Humans, Calibration, Quality Control, Bias, Laboratories, Prostate-Specific Antigen
- Abstract
The variability between calibrations can be larger than the within-calibration variation for some measurement procedures, that is, a large CVbetween:CVwithin ratio. In this study, we examined the false rejection rate and probability of bias detection of quality control (QC) rules at varying calibration CVbetween:CVwithin ratios. Historical QC data for six representative routine clinical chemistry serum measurement procedures (calcium, creatinine, aspartate aminotransferase, thyrotrophin, prostate specific antigen and gentamicin) were extracted to derive the CVbetween:CVwithin ratios using analysis of variance. Additionally, the false rejection rate and probability of bias detection of three 'Westgard' QC rules (2:2S, 4:1S, 10X) at varying CVbetween:CVwithin ratios (0.1-10), magnitudes of bias, and QC events per calibration (5-80) were examined through simulation modelling. The CVbetween:CVwithin ratios for the six routine measurement procedures ranged from 1.1 to 34.5. With ratios >3, false rejection rates were generally above 10%. Similarly, for QC rules involving a greater number of consecutive results, false rejection rates increased with increasing ratios, while all rules achieved maximum bias detection. Laboratories should avoid the 2:2S, 4:1S and 10X QC rules when calibration CVbetween:CVwithin ratios are elevated, particularly for those measurement procedures with a higher number of QC events per calibration., (Copyright © 2023 Royal College of Pathologists of Australasia. Published by Elsevier B.V. All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF
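The interplay described in the abstract above can be reproduced with a toy simulation. This is an illustrative sketch, not the authors' published model: the function name, the choice of setting control limits from the total observed SD, and all default parameters are assumptions. Each calibration cycle receives a random offset (sigma_between = ratio × sigma_within) and the 2:2s rule is applied to consecutive z-scores.

```python
import math
import random

def false_rejection_rate_22s(ratio, qc_per_cal=20, n_cal=2000, seed=1):
    """Estimate the per-result 2:2s false rejection rate when every
    calibration introduces its own random shift (error-free process)."""
    rng = random.Random(seed)
    sd_w = 1.0                                   # within-calibration SD
    sd_b = ratio * sd_w                          # between-calibration SD
    sd_total = math.sqrt(sd_w ** 2 + sd_b ** 2)  # control limits use total SD
    rejections = results = 0
    for _ in range(n_cal):
        shift = rng.gauss(0.0, sd_b)             # new calibration offset
        prev = 0.0
        for k in range(qc_per_cal):
            z = (shift + rng.gauss(0.0, sd_w)) / sd_total
            results += 1
            # 2:2s: two consecutive results beyond the same 2SD limit
            if k > 0 and ((z > 2 and prev > 2) or (z < -2 and prev < -2)):
                rejections += 1
            prev = z
    return rejections / results
```

Because results within one calibration share the same offset, consecutive-result rules trip far more often as the ratio grows, mirroring the abstract's finding.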
4. Lot-to-lot reagent verification: Effect of sample size and replicate measurement on linear regression approaches.
- Author
- Koh NWX, Markus C, Loh TP, and Lim CY
- Subjects
- Humans, Indicators and Reagents, Least-Squares Analysis, Linear Models, Sample Size, Laboratories
- Abstract
Background: We investigate the simulated impact of varying sample size and replicate number using ordinary least squares (OLS) and Deming regression (DR), in both weighted and unweighted forms, when applied to paired measurements in lot-to-lot verification., Methods: Simulation parameters investigated in this study were: range ratio, analytical coefficient of variation, sample size, replicates, alpha (level of significance), and constant and proportional biases. For each simulation scenario, 10,000 iterations were performed, and the average probability of bias detection was determined., Results: Generally, the weighted forms of regression significantly outperformed the unweighted forms for bias detection. At the low range ratio (1:10), for both weighted OLS and DR, improved bias detection was observed with a greater number of replicates than with an increased number of comparison samples. At the high range ratio (1:1000), for both weighted OLS and DR, increasing the number of replicates above two was only slightly more advantageous in the scenarios examined. Increasing the number of comparison samples resulted in better detection of smaller biases between reagent lots., Conclusions: The results of this study allow laboratories to determine a tailored approach to lot-to-lot verification studies, balancing the number of replicates and comparison samples with the analytical performance of the measurement procedures involved., (Copyright © 2022 Elsevier B.V. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
5. An Objective Approach to Deriving the Clinical Performance of Autoverification Limits.
- Author
- Loh TP, Tan RZ, Lim CY, and Markus C
- Subjects
- Humans, Laboratories
- Abstract
This study describes an objective approach to deriving the clinical performance of autoverification rules to inform laboratory practice when implementing them. Anonymized historical laboratory data for 12 biochemistry measurands were collected and Box-Cox-transformed to approximate a Gaussian distribution. The historical laboratory data were assumed to be error-free. Using probability theory, the clinical specificity of a set of autoverification limits can be derived by calculating the percentile values of the overall distribution of a measurand. The 5th and 95th percentile values of the laboratory data were calculated to achieve a 90% clinical specificity. Next, a predefined tolerable total error adopted from the Royal College of Pathologists of Australasia Quality Assurance Program was applied to the extracted data before being subjected to Box-Cox transformation. Using a standard normal distribution, the clinical sensitivity can be derived from the probability of the Z-value to the right of the autoverification limit for a one-tailed probability, multiplied by two for a two-tailed probability. The clinical sensitivity showed an inverse relationship with between-subject biological variation. The laboratory can thus set and assess the clinical performance of its autoverification rules so that they conform to its desired risk profile.
- Published
- 2022
- Full Text
- View/download PDF
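The percentile and Z-value calculations outlined in the abstract above can be sketched with the standard library's `NormalDist`. The function name, the symmetric two-sided limits, and the example error size are assumptions, and real data would first need the Box-Cox step the abstract describes.

```python
from statistics import NormalDist

def autoverification_performance(mean, sd, spec_target=0.90, total_error=3.0):
    """Derive autoverification limits as percentiles of the (assumed
    error-free, Gaussian-transformed) result distribution, then estimate
    clinical sensitivity for an error of size `total_error`."""
    dist = NormalDist(mean, sd)
    tail = (1 - spec_target) / 2            # 5th/95th centiles for 90% specificity
    lower = dist.inv_cdf(tail)
    upper = dist.inv_cdf(1 - tail)
    # sensitivity: probability that a result carrying a positive error of
    # `total_error` lands beyond the upper limit (one-tailed)
    sensitivity = 1 - NormalDist(mean + total_error, sd).cdf(upper)
    return lower, upper, sensitivity
```

For standardized data (mean 0, SD 1), the limits land at ±1.645 and a 3 SD error is flagged roughly 91% of the time, illustrating how sensitivity falls as the distribution (dominated by between-subject variation) widens relative to the tolerable error.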
6. Comparison of two (data mining) indirect approaches for between-subject biological variation determination.
- Author
- Tan RZ, Markus C, Vasikaran S, and Loh TP
- Subjects
- Computer Simulation, Data Mining, Humans, Reference Values, Biological Variation, Population, Laboratories
- Abstract
Background: Between-subject biological variation (CVg) is an important parameter in several aspects of laboratory practice, including the setting of analytical performance specifications, delta checks, and calculation of the index of individuality. Using simulations, we compare the performance of two indirect (data mining) approaches for deriving CVg., Methods: The expected mean squares (EMS) method was compared against that proposed by Harris and Fraser. Using numerical simulations, d (the percentage difference in the mean between the non-pathological and pathological populations), CVi (the within-subject coefficient of variation of the non-pathological distribution), f (the fraction of pathological values), and e (the relative increase in CVi of the pathological distribution) were varied for a total of 320 conditions to examine the impact on the relative fractional error of the recovered CVg compared to the true value., Results: The EMS and Harris and Fraser's approaches yielded similar performance, with 158 conditions and 157 conditions, respectively, within ±0.20 fractional error of the true underlying CVg for the normal and lognormal distributions. Both the EMS and Harris and Fraser's methods performed better using the calculated CVi rather than the actual ('presumptive') CVi. The number of conditions within 0.20 fractional error of the true underlying CVg did not differ significantly between the normal and lognormal distributions. The estimation of CVg improved with decreasing values of f, d and CVi:CVg., Discussions: The two statistical approaches included in this study showed reliable performance under the simulation conditions examined., (Copyright © 2022 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
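The expected mean squares (EMS) idea compared in the abstract above can be illustrated for a balanced one-way design: for n subjects with k results each, E[MS_between] = sigma_w² + k·sigma_g² and E[MS_within] = sigma_w², so sigma_g² = (MS_between − MS_within)/k. This is a minimal textbook sketch, and the cohort parameters (true CVg 10%, CVi 5%, mean 100) are invented for the example.

```python
import random
import statistics

def cvg_from_ems(data):
    """Expected mean squares sketch: one-way ANOVA on repeated results per
    subject; sigma_g^2 = (MS_between - MS_within) / k for a balanced design."""
    k = len(data[0])                       # replicates per subject (balanced)
    n = len(data)
    subject_means = [statistics.fmean(row) for row in data]
    grand = statistics.fmean(subject_means)
    msb = k * sum((m - grand) ** 2 for m in subject_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(data, subject_means) for x in row) / (n * (k - 1))
    var_g = max((msb - msw) / k, 0.0)      # clamp negative estimates to zero
    return (var_g ** 0.5) / grand * 100    # CVg as a percentage

# simulated cohort: 500 subjects, true CVg 10%, CVi 5% at a mean of 100
rng = random.Random(0)
data = [[rng.gauss(mu, 5.0) for _ in range(4)]
        for mu in (rng.gauss(100.0, 10.0) for _ in range(500))]
```

Running `cvg_from_ems(data)` recovers a value close to the simulated 10%, the kind of recovery the simulation study quantifies across its 320 conditions.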
7. Comparison of six regression-based lot-to-lot verification approaches.
- Author
- Koh NWX, Markus C, Loh TP, and Lim CY
- Subjects
- Bias, Computer Simulation, Humans, Indicators and Reagents, Laboratories
- Abstract
Objectives: Detection of between-lot reagent bias is clinically important and can be assessed by applying regression-based statistics to several paired measurements obtained from the existing and new candidate lots. Here, the bias detection capabilities of six regression-based lot-to-lot reagent verification assessments, including an extension of the Bland-Altman with regression approach, are compared., Methods: Least squares and Deming regression (in both weighted and unweighted forms), confidence ellipses and Bland-Altman with regression (BA-R) approaches were investigated. The numerical simulation included permutations of the following parameters: differing result range ratios (upper:lower measurement limits), levels of significance (alpha), constant and proportional biases, analytical coefficients of variation (CV), and numbers of replicates and sample sizes. The sample concentrations simulated were drawn from a uniformly distributed concentration range., Results: At a low range ratio (1:10, CV 3%), the BA-R performed the best, albeit with a higher false rejection rate, closely followed by the weighted regression approaches. At larger range ratios (1:1,000, CV 3%), the BA-R performed poorly and the weighted regression approaches performed the best. At higher assay imprecision (CV 10%), all six approaches performed poorly, with bias detection rates <50%. A lower alpha reduced the false rejection rate, while greater sample numbers and replicates improved bias detection., Conclusions: When performing reagent lot verification, laboratories need to finely balance the false rejection rate (selecting an appropriate alpha) with the power of bias detection (an appropriate statistical approach matched to assay performance characteristics) and operational considerations (number of clinical samples and replicates, not having an alternate reagent lot)., (© 2022 Walter de Gruyter GmbH, Berlin/Boston.)
- Published
- 2022
- Full Text
- View/download PDF
8. Comparison of 8 methods for univariate statistical exclusion of pathological subpopulations for indirect reference intervals and biological variation studies.
- Author
- Tan RZ, Markus C, Vasikaran S, and Loh TP
- Subjects
- Humans, Reference Values, Laboratories, Research Design
- Abstract
Background: Indirect reference interval and biological variation studies rely heavily on statistical methods to separate pathological and non-pathological subpopulations within the same dataset. In recognition of this, we compare the performance of eight univariate statistical methods for identification and exclusion of values originating from pathological subpopulations., Methods: The eight approaches examined were: Tukey's rule with and without Box-Cox transformation; median absolute deviation; double median absolute deviation; Gaussian mixture models; van der Loo (Vdl) methods 1 and 2; and the Kosmic approach. Four scenarios, including lognormal distributions, were examined, varying the conditions through the number of pathological populations and their central location, spread and proportion, for a total of 256 simulated mixed populations. A performance criterion of ±0.05 fractional error from the true underlying lower and upper reference interval was chosen., Results: Overall, the Kosmic method was a standout, with the highest number of scenarios lying within the acceptable error, followed by Vdl method 1 and Tukey's rule. Kosmic and Vdl method 1 appear to discriminate the non-pathological reference population better in the case of log-normally distributed data. When the proportion and spread of pathological subpopulations were high, the performance of statistical exclusion deteriorated considerably., Discussions: It is important that laboratories use a priori defined clinical criteria to minimise the proportion of pathological subpopulations in a dataset prior to analysis. The curated dataset should then be carefully examined so that the appropriate statistical method can be applied., (Copyright © 2022 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
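Tukey's rule, one of the eight methods compared above, is straightforward to sketch with the standard library; the fence multiplier k = 1.5 is the conventional default, not necessarily the value used in the study, and the function name is invented.

```python
import statistics

def tukey_exclude(values, k=1.5):
    """Tukey's rule sketch: keep only values inside [Q1 - k*IQR, Q3 + k*IQR],
    excluding presumptively pathological outliers."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # default 'exclusive' method
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]
```

As the abstract cautions, such univariate fences only work when the pathological fraction is small; a large, widely spread pathological subpopulation drags the quartiles themselves.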
9. Lot-to-lot reagent verification: challenges and possible solutions.
- Author
- Loh TP, Sandberg S, and Horvath AR
- Subjects
- Humans, Quality Control, Laboratories, Reagent Kits, Diagnostic
- Abstract
Lot-to-lot verification is an important laboratory activity that is performed to monitor the consistency of analytical performance over time. In this opinion paper, the concept, clinical impact, challenges and potential solutions for lot-to-lot verification are examined., (© 2022 Walter de Gruyter GmbH, Berlin/Boston.)
- Published
- 2022
- Full Text
- View/download PDF
10. Setting analytical performance specifications using HbA1c as a model measurand.
- Author
- Loh TP, Smith AF, Bell KJL, Lord SJ, Ceriotti F, Jones G, Bossuyt P, Sandberg S, and Horvath AR
- Subjects
- Bias, Consensus, Glycated Hemoglobin analysis, Humans, Laboratories
- Abstract
Analytical performance specifications (APS) for measurands describe the minimum analytical quality requirements for their measurement. These APS are used to monitor and contain the systematic (trueness/bias) and random errors (precision/imprecision) of a laboratory measurement to ensure the results are "fit for purpose" in informing clinical decisions about managing a patient's health condition. In this review, we highlight the wide variation in the setting of APS, using different levels of evidence and approaches, as recommended by the Milan Consensus. The setting of a priori defined outcome-based APS for HbA1c remains challenging. Promising indirect alternatives seek to link the clinical utility of HbA1c and APS by defining statistical confidence for interpreting the laboratory values, or through simulation of clinical performance at varying levels of analytical performance. APS defined from biological variation estimates in healthy individuals using the current formulae are unachievable by nearly all routine laboratory methods for HbA1c testing. On the other hand, the APS employed in external quality assurance programs have been progressively tightened and have greatly facilitated improvements in the quality of HbA1c testing. Laboratories should select the APS that fits their intended clinical use and should document the data and rationale underpinning those selections. Where possible, common APS should be adopted across a region or country to facilitate the movement of patients and patient data across health care facilities., (Copyright © 2021 Elsevier B.V. All rights reserved.)
- Published
- 2021
- Full Text
- View/download PDF
11. Internal quality control: Moving average algorithms outperform Westgard rules.
- Author
- Poh DKH, Lim CY, Tan RZ, Markus C, and Loh TP
- Subjects
- Humans, Algorithms, Laboratories, Models, Theoretical, Programming Languages, Quality Control
- Abstract
Introduction: Internal quality control (IQC) is traditionally interpreted against predefined control limits using multi-rules or 'Westgard rules'. These include the commonly used 1:3s and 2:2s rules. Either individually or in combination, these rules have limited sensitivity for detection of systematic errors. In this proof-of-concept study, we directly compare the performance of three moving average algorithms with Westgard rules for detection of systematic error., Methods: In this simulation study, 'error-free' IQC data (control case) were generated. Westgard rules (1:3s and 2:2s) and three moving average algorithms (simple moving average (SMA), weighted moving average (WMA), and exponentially weighted moving average (EWMA), all using ±3SD as control limits) were applied to examine the false positive rates. Following this, systematic errors were introduced to the baseline IQC data to evaluate the probability of error detection and the average number of episodes for error detection (ANEed)., Results: From the power function graphs, all three moving average algorithms showed a better probability of error detection than Westgard rules. They also had lower ANEed than Westgard rules. False positive rates were comparable between the moving average algorithms and Westgard rules (all <0.5%). The performance of the SMA algorithm was comparable to that of the weighted forms (i.e. WMA and EWMA)., Conclusion: Application of an SMA algorithm to IQC data improves systematic error detection compared with Westgard rules. Application of SMA algorithms can simplify laboratories' IQC strategy., (Copyright © 2021 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.)
- Published
- 2021
- Full Text
- View/download PDF
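An EWMA control scheme of the kind compared above can be sketched in a few lines. The smoothing constant, control-limit multiplier, and use of the asymptotic EWMA SD are illustrative assumptions rather than the study's exact configuration, and the function name is invented.

```python
import random

def ewma_first_rejection(zscores, lam=0.2, limit=3.0):
    """Return the index of the first EWMA control-limit violation, or None.
    Limits are +/- limit * asymptotic SD of the EWMA of N(0,1) data."""
    sd_ewma = (lam / (2 - lam)) ** 0.5   # steady-state SD of the EWMA statistic
    ewma = 0.0
    for i, z in enumerate(zscores):
        ewma = lam * z + (1 - lam) * ewma
        if abs(ewma) > limit * sd_ewma:
            return i
    return None

rng = random.Random(42)
in_control = [rng.gauss(0.0, 1.0) for _ in range(100)]
# a second stream carrying a 2SD systematic error from the halfway point
with_error = in_control[:50] + [z + 2.0 for z in in_control[50:]]
```

Because the EWMA accumulates evidence across consecutive results, a sustained 2SD shift is caught quickly, whereas a 2:2s rule needs two individually extreme results in a row.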
12. Impact of combining data from multiple instruments on performance of patient-based real-time quality control.
- Author
- Zhou Q, Loh TP, Badrick T, and Lim CY
- Subjects
- Humans, Algorithms, Laboratories, Monitoring, Physiologic, Quality Control
- Abstract
Introduction: It is unclear what the best strategy is for applying a patient-based real-time quality control (PBRTQC) algorithm in the presence of multiple instruments. This simulation study compared the error detection capability of applying PBRTQC algorithms to instruments individually and in combination, using serum sodium as an example., Materials and Methods: Four sets of random serum sodium measurements were generated with differing means and standard deviations to represent four simulated instruments. Moving median with winsorization was selected as the PBRTQC algorithm. The PBRTQC parameters (block size and control limits) were optimized and applied to the four simulated laboratory data sets individually and in combination., Results: When the PBRTQC algorithm was individually optimized and applied to the data of the individual simulated instruments, it was able to detect bias several-fold faster than when the data were combined. Similarly, the individually applied algorithms had perfect error detection rates across different magnitudes of bias, whereas the algorithm applied to the combined data missed smaller biases. The individually applied PBRTQC algorithm also performed more consistently among the simulated instruments than when the data were combined., Discussion: While combining data from different instruments can increase the data stream and, hence, the speed of error detection, it may widen the control limits and compromise the probability of error detection. The presence of multiple instruments in the data stream may also dilute the effect of an error that affects only a single instrument., Competing Interests: Potential conflict of interest None declared., (Croatian Society of Medical Biochemistry and Laboratory Medicine.)
- Published
- 2021
- Full Text
- View/download PDF
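The moving median with winsorization named in the abstract above can be sketched as follows; the function name and the fixed winsorization limits are assumptions for illustration.

```python
import statistics

def winsorized_moving_median(results, block, lo, hi):
    """PBRTQC sketch: winsorize each patient result to [lo, hi], then report
    the median of every consecutive `block`-sized window as the control value."""
    clipped = [min(max(r, lo), hi) for r in results]
    return [statistics.median(clipped[i - block + 1:i + 1])
            for i in range(block - 1, len(clipped))]
```

Winsorization caps the influence of extreme (often genuinely pathological) results, so the control statistic tracks the analytical process rather than individual outliers; per the study, tuning `block` and the limits per instrument beats pooling all instruments into one stream.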
13. Evidence-based approach to setting delta check rules.
- Author
- Markus C, Tan RZ, and Loh TP
- Subjects
- Humans, Quality Control, Reference Values, Laboratories
- Abstract
Delta checks are a post-analytical verification tool that compares the difference between sequential laboratory results belonging to the same patient against a predefined limit. This unique quality tool highlights a potential error at the individual patient level. A difference in sequential laboratory results that exceeds the predefined limit is considered likely to contain an error that requires further investigation, which can be time and resource intensive. This may delay the provision of the result to the healthcare provider or entail recollection of the patient sample. Delta checks have been used primarily to detect sample misidentification (sample mix-up, wrong blood in tube), and recent advancements in laboratory medicine, including the adoption of protocolized procedures, information technology and automation in the total testing process, have significantly reduced the prevalence of such errors. As such, delta check rules need to be selected carefully to balance the clinical risk of these errors against the need to maintain operational efficiency. Historically, delta check rules have been set by professional opinion based on reference change values (biological variation) or the published literature. Delta check rules implemented in this manner may not inform laboratory practitioners of their real-world performance. This review discusses several evidence-based approaches to the optimal setting of delta check rules that directly inform the laboratory practitioner of the error detection capabilities of the selected rules. Subsequent verification of workflow for the selected delta check rules is also discussed. This review is intended to provide practical assistance to laboratories in setting evidence-based delta check rules that best suit their local operational and clinical needs.
- Published
- 2021
- Full Text
- View/download PDF
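A minimal delta check of the kind described above might look like this; the dual absolute/percentage limit form is one common convention, not a prescription from the review, and the names are invented.

```python
def delta_check(current, previous, limit_abs=None, limit_pct=None):
    """Flag a result pair whose difference exceeds a predefined delta limit,
    expressed as an absolute change and/or a percentage change."""
    delta = current - previous
    if limit_abs is not None and abs(delta) > limit_abs:
        return True
    if limit_pct is not None and previous != 0 \
            and abs(delta) / abs(previous) * 100 > limit_pct:
        return True
    return False
```

The evidence-based approaches the review advocates amount to choosing `limit_abs`/`limit_pct` (and the time window) from the measured error detection and flag rates in the laboratory's own data, rather than from opinion alone.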
14. Recommendation for performance verification of patient-based real-time quality control.
- Author
- Loh TP, Bietenbeck A, Cervinski MA, van Rossum HH, Katayev A, and Badrick T
- Subjects
- Humans, Time and Motion Studies, Laboratories standards, Quality Control
- Abstract
Patient-based real-time quality control (PBRTQC) is a laboratory tool for monitoring the performance of the testing process. It includes well-established procedures like Bull's algorithm, average of normals, moving median, moving average (MA) and exponentially (weighted) MAs. Following the setup and optimization processes, a key step prior to the routine implementation of PBRTQC is the verification and documentation of the performance of the PBRTQC as part of the laboratory quality system. This verification process should provide a realistic representation of the performance of the PBRTQC in the environment it is being implemented in, to allow proper risk assessment by laboratory practitioners. This document focuses on recommendations for performance verification of PBRTQC prior to implementation.
- Published
- 2020
- Full Text
- View/download PDF
15. Detecting reagent lot shifts using proficiency testing data.
- Author
- Tan RZ, Punyalack W, Graham P, Badrick T, and Loh TP
- Subjects
- Bias, Humans, Indicators and Reagents standards, Peer Group, Quality Assurance, Health Care, Laboratories standards, Laboratory Proficiency Testing, Quality Control
- Abstract
Clinically significant systematic analytical shifts can evade detection despite the between-lot reagent verification, quality control and proficiency testing systems practiced by most laboratories. Through numerical simulations, we present two methods to determine whether there has been a shift in the proficiency testing peer group of interest, peer group i, using the measurements from peer group i and J other peer groups. In method 1 ('group mean'), the distance of peer group i from the mean of the other J peer groups is used to determine whether a shift has occurred. In method 2 (the 'inter-peer group' method), the distances of peer group i from each of the means of the other J peer groups are used to determine whether a shift has occurred. The power of detection for both methods increases with the magnitude of the systematic shift, the number of peer groups, the number of laboratories within the peer groups and the proportion of laboratories within the affected peer group, and with smaller analytical imprecision. When the number of peer groups is low, the power of detection of the group mean method is comparable to that of the inter-peer group method using the m = 1 criterion (a single inter-peer group comparison that exceeds the control limit is considered a flag). At larger numbers of peer groups, the inter-peer group method using the same (m = 1) criterion outperforms the group mean method. The proposed methods can elevate the professional role of the proficiency testing program to that of monitoring the peer group method on top of the performance of individual laboratories., (Copyright © 2019 Royal College of Pathologists of Australasia. Published by Elsevier B.V. All rights reserved.)
- Published
- 2019
- Full Text
- View/download PDF
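The 'group mean' method described above reduces to a simple comparison once the peer-group means are in hand; this sketch assumes a precomputed control limit, and the function name is invented.

```python
def group_mean_flag(peer_means, i, control_limit):
    """'Group mean' method sketch: flag peer group i when its mean lies more
    than control_limit away from the mean of the other peer-group means."""
    others = [m for j, m in enumerate(peer_means) if j != i]
    return abs(peer_means[i] - sum(others) / len(others)) > control_limit
```

The inter-peer group variant instead compares peer group i against each other group's mean separately and flags when at least m of those comparisons exceed the limit.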
16. Recommendations for laboratory informatics specifications needed for the application of patient-based real time quality control.
- Author
- Loh TP, Cervinski MA, Katayev A, Bietenbeck A, van Rossum H, and Badrick T
- Subjects
- Humans, Information Storage and Retrieval, Quality Control, Research Design, Time Factors, Laboratories, Medical Informatics methods
- Abstract
Patient-based real-time quality control (PBRTQC) algorithms provide many advantages over conventional QC approaches, including lower cost, absence of commutability problems, continuous real-time monitoring of performance, and sensitivity to pre-analytical error. However, PBRTQC is not as simple to implement as conventional QC because of the requirement to access patient data as well as to set up appropriate rules and action protocols and choose the best statistical algorithms. These requirements need capable and flexible laboratory informatics (middleware). In this document, the necessary features of software packages needed to support PBRTQC are discussed, as well as recommendations for optimal integration of this technique into laboratory practice., (Copyright © 2019 Elsevier B.V. All rights reserved.)
- Published
- 2019
- Full Text
- View/download PDF
17. Verification of out-of-control situations detected by "average of normal" approach.
- Author
- Liu J, Tan CH, Loh TP, and Badrick T
- Subjects
- Chemistry, Clinical methods, Humans, Quality Control, Reference Values, Laboratories organization & administration
- Abstract
Objectives: "Average of normal" (AoN) or "moving average" is increasingly used as an adjunct quality control tool in laboratory practice. Little guidance exists on how to verify whether an out-of-control situation in the AoN chart is due to a shift in analytical performance or to underlying patient characteristics., Design and Methods: Through simulation based on clinical data, we examined 1) the location of the last apparently stable period in the AoN control chart after an analytical shift, and 2) an approach to verify whether the observed shift is related to an analytical shift by repeat testing of archived patient samples from the stable period, for 21 common analytes., Results: The number of blocks of results to look back for the stable period increased with the duration of the analytical shift, and was larger when smaller AoN block sizes were used. To verify an analytical shift, 3 archived samples from the analytically stable period should be retested. In particular, the process is deemed to have shifted if a difference of >2 analytical standard deviations (i.e. the 1:2s rejection rule) between the original and retested results is observed in any of the 3 samples. The probability of Type-1 error (i.e., false rejection) and the power (i.e., detecting a true analytical shift) of this rule are <0.1 and >0.9, respectively., Conclusions: The use of appropriately archived patient samples to verify an apparent analytical shift is preferred to quality control materials. Nonetheless, the above findings may also apply to quality control materials, barring matrix effects., (Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.)
- Published
- 2016
- Full Text
- View/download PDF
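The retesting rule described above (1:2s applied to 3 archived samples) can be sketched directly; the function name is an assumption.

```python
def verify_analytical_shift(original, retested, analytical_sd):
    """1:2s-style verification sketch: the process is deemed shifted when any
    archived sample retests more than 2 analytical SDs from its original result."""
    return any(abs(r - o) > 2 * analytical_sd
               for o, r in zip(original, retested))
```

With 3 samples, this any-of-three rule keeps the false rejection probability below 0.1 while retaining >0.9 power for a true shift, per the study's simulations.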
18. An automated and objective method for age partitioning of reference intervals based on continuous centile curves.
- Author
- Yang Q, Lew HY, Peh RH, Metz MP, and Loh TP
- Subjects
- Adolescent, Age Distribution, Alkaline Phosphatase blood, Child, Child, Preschool, Creatinine blood, Female, Humans, Infant, Infant, Newborn, Reference Values, Young Adult, Laboratories, Pediatrics methods, Software
- Abstract
Reference intervals are the most commonly used decision support tool when interpreting quantitative laboratory results. They may require partitioning to better describe subpopulations that display significantly different reference values. Partitioning by age is particularly important for the paediatric population, since there are marked physiological changes associated with growth and maturation. However, most partitioning methods are either technically complex or require prior knowledge of the underlying physiology/biological variation of the population. There is growing interest in the use of continuous centile curves, which provide seamless laboratory reference values as a child grows, as an alternative to rigidly described fixed reference intervals. However, the mathematical functions that describe these curves can be complex and may not be easily implemented in laboratory information systems. Hence, the use of fixed reference intervals is expected to continue for the foreseeable future. We developed a method that objectively proposes optimised age partitions and reference intervals for quantitative laboratory data (http://research.sph.nus.edu.sg/pp/ppResult.aspx), based on the sum of gradient that best describes the underlying distribution of the continuous centile curves. It is hoped that this method may improve the selection of age intervals for partitioning, which is receiving increasing attention in paediatric laboratory medicine., (Copyright © 2016 Royal College of Pathologists of Australasia. Published by Elsevier B.V. All rights reserved.)
- Published
- 2016
- Full Text
- View/download PDF
19. Impact of phlebotomy decision support application on sample collection errors and laboratory efficiency.
- Author
- Loh TP, Saw S, Chai V, and Sethi SK
- Subjects
- Clinical Audit, Humans, Phlebotomy standards, Research Design statistics & numerical data, Decision Support Techniques, Laboratories statistics & numerical data, Phlebotomy methods
- Published
- 2011
- Full Text
- View/download PDF
20. Linearity assessment: deviation from linearity and residual of linear regression approaches.
- Author
- Lim, Chun Yee, Lee, Xavier, Tran, Mai Thi Chi, Markus, Corey, Loh, Tze Ping, Ho, Chung Shun, Theodorsson, Elvar, Greaves, Ronda F., Cooke, Brian R., and Zakaria, Rosita
- Subjects
- EVALUATION methodology, COMPUTER simulation, EXPERIMENTAL design, MEASUREMENT, LABORATORIES
- Abstract
In this computer simulation study, we examine four different statistical approaches to linearity assessment: two variants of deviation from linearity (individual (IDL) and averaged (ADL)), along with the detection capabilities of residuals of linear regression (individual and averaged). From the results of the simulation, the following broad suggestions are provided to laboratory practitioners when performing linearity assessment. High imprecision can confound linearity investigations by producing a high false positive rate or a low power of detection. Therefore, the imprecision of the measurement procedure should be considered when interpreting linearity assessment results, and in the presence of high imprecision those results should be interpreted with caution. The different linearity assessment approaches examined in this study performed well under different analytical scenarios; for optimal outcomes, a considered and tailored study design should be implemented. With the exception of specific scenarios, both ADL and IDL methods were suboptimal for the assessment of linearity compared with the regression residual approaches. When imprecision is low (3 %), the averaged residual of linear regression with triplicate measurements and a non-linearity acceptance limit of 5 % produces false positive rates of <5 % and a power of >70 % for detecting non-linearity across different types and degrees of non-linearity. Departures from linearity are difficult to identify in practice, and enhanced methods of detection need development. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
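As a companion to the abstract above, here is a minimal sketch of an averaged residual of linear regression check, assuming replicate measurements at each dilution level are averaged, an ordinary least-squares line is fitted through the level means, and the percentage deviation of each mean from the fitted line is compared against a 5 % acceptance limit. The function name and data are illustrative, not the simulation code used in the study.

```python
def assess_linearity(levels, replicate_measurements, limit_pct=5.0):
    """Flag non-linearity via averaged residuals of linear regression.

    levels: assigned concentrations of the dilution series.
    replicate_measurements: list of replicate results per level.
    Returns the percentage deviations from the fitted line and a
    boolean flag indicating whether any deviation exceeds the limit.
    """
    # Average the replicates at each dilution level
    means = [sum(reps) / len(reps) for reps in replicate_measurements]
    n = len(levels)
    x_bar = sum(levels) / n
    y_bar = sum(means) / n
    # Ordinary least-squares fit through the level means
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(levels, means))
             / sum((x - x_bar) ** 2 for x in levels))
    intercept = y_bar - slope * x_bar
    # Percentage deviation of each level mean from the fitted line
    deviations = [100.0 * (y - (slope * x + intercept)) / (slope * x + intercept)
                  for x, y in zip(levels, means)]
    nonlinear = any(abs(d) > limit_pct for d in deviations)
    return deviations, nonlinear
```

A perfectly proportional dilution series yields near-zero deviations and is accepted as linear; a series that curves upward at the top level exceeds the 5 % limit and is flagged.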
21. Laboratory practices to mitigate biohazard risks during the COVID-19 outbreak: an IFCC global survey.
- Author
-
Loh, Tze Ping, Horvath, Andrea Rita, Wang, Cheng-Bin, Koch, David, Lippi, Giuseppe, Mancini, Nicasio, Ferrari, Maurizio, Hawkins, Robert, Sethi, Sunil, and Adeli, Khosrow
- Subjects
- COVID-19 pandemic, COVID-19, HOSPITAL laboratories, LABORATORIES, HAZARDOUS substances, PERSONAL protective equipment, MEDICAL masks, SURGICAL gloves - Abstract
Objectives: A global survey was conducted by the IFCC Task Force on COVID-19 to better understand how general biochemistry laboratories manage the pre-analytical, analytical and post-analytical processes to mitigate biohazard risks during the coronavirus disease 2019 (COVID-19) pandemic. Methods: An electronic survey was developed to record the general characteristics of the laboratory, as well as the pre-analytical, analytical, post-analytical and operational practices of biochemistry laboratories that are managing clinical samples of patients with COVID-19. Results: A total of 1210 submissions were included in the analysis. The majority of responses came from hospital central/core laboratories that serve hospital patient groups and handle moderate daily sample volumes. There was a decrease in the use of pneumatic tube transport, an increase in hand delivery, and an increase in the number of layers of plastic bags for samples of patients with clinically suspected or confirmed COVID-19. Surgical face masks and gloves were the most commonly used personal protective equipment (PPE). Just over 50% of the laboratories did not perform an additional decontamination step on the instrument after analysis of samples from patients with clinically suspected or confirmed COVID-19. A fifth of laboratories disallowed add-on testing on these samples. Less than a quarter of laboratories autoclaved their samples prior to disposal. Conclusions: The survey responses showed wide variation in pre-analytical, analytical and post-analytical practices in terms of PPE adoption and biosafety processes. It is likely that many of the suboptimal biosafety practices are related to practical local factors, such as limited PPE availability and lack of automated instrumentation. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
22. Operational considerations and challenges of biochemistry laboratories during the COVID-19 outbreak: an IFCC global survey.
- Author
-
Loh, Tze Ping, Horvath, Andrea Rita, Wang, Cheng-Bin, Koch, David, Adeli, Khosrow, Mancini, Nicasio, Ferrari, Maurizio, Hawkins, Robert, Sethi, Sunil, and Lippi, Giuseppe
- Subjects
- COVID-19 pandemic, COVID-19, LABORATORIES, HOSPITAL laboratories, PERSONAL protective equipment, CHEMICAL laboratories, BIOCHEMISTRY - Abstract
Objectives: The International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) Task Force on COVID-19 conducted a global survey to understand how biochemistry laboratories managed operational challenges during the coronavirus disease 2019 (COVID-19) pandemic. Materials and methods: An electronic survey was distributed globally to record the operational considerations to mitigate biosafety risks in the laboratory. Additionally, the laboratories were asked to indicate the operational challenges they faced. Results: A total of 1210 valid submissions were included in this analysis. Most of the survey participants worked in hospital laboratories. Around 15% of laboratories restricted certain tests on patients with clinically suspected or confirmed COVID-19 over biosafety concerns. Just over 10% of the laboratories had to restrict their test menu or services due to resource constraints. Approximately a third of laboratories performed temperature monitoring, while two thirds of laboratories increased the frequency of disinfection. Just under 50% of the laboratories split their teams. The greatest reported challenge faced by laboratories during the COVID-19 pandemic was securing sufficient supplies of personal protective equipment (PPE), analytical equipment, including that used at the point of care, as well as reagents, consumables and other laboratory materials. This was followed by staff shortages and the need to manage staff morale, anxiety and deployment. Conclusions: The restriction of tests and services may have undesirable clinical consequences, as clinicians are deprived of important information to deliver appropriate care to their patients. Staff rostering and biosafety concerns require longer-term solutions, as they are crucial for the continued operation of the laboratory during what may well be a prolonged pandemic. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
23. Key questions about the future of laboratory medicine in the next decade of the 21st century: A report from the IFCC-Emerging Technologies Division.
- Author
-
Greaves, Ronda F., Bernardini, Sergio, Ferrari, Maurizio, Fortina, Paolo, Gouget, Bernard, Gruson, Damien, Lang, Tim, Loh, Tze Ping, Morris, Howard A., Park, Jason Y., Roessler, Markus, Yin, Peng, and Kricka, Larry J.
- Subjects
- CLINICAL pathology, LABORATORIES, TWENTY-first century, PATHOLOGICAL laboratories, ECONOMIC opportunities, MASS spectrometry, GREEN technology - Abstract
This review advances the discussion about the future of laboratory medicine in the 2020s across five major topic areas: 1. the "big picture" of healthcare; 2. pre-analytical factors; 3. analytical factors; 4. post-analytical factors; and 5. relationships. It offers a next-decade perspective on laboratory medicine and the likely impact of the predicted changes by means of a number of carefully focused questions that draw upon predictions made since 2013. The "big picture" of healthcare explores the effects of changing patient populations, the brain-to-brain loop, direct access testing, robots and total laboratory automation, and green technologies and sustainability. The pre-analytical section considers the role of different sample types, drones, and biobanks. The analytical section examines advances in point-of-care testing, mass spectrometry, genomics, gene and immunotherapy, 3D-printing, and total laboratory quality. The post-analytical section discusses the value of laboratory medicine, the emerging role of artificial intelligence, the management and interpretation of omics data, and common reference intervals and decision limits. Finally, the relationships section explores the role of laboratory medicine scientific societies, the educational needs of laboratory professionals, communication, the relationship between laboratory professionals and clinicians, laboratory medicine financing, and the anticipated economic opportunities and outcomes in the 2020s. • Laboratory medicine is shaped by technology, society, politics, economics, and regulations. • Technological advancements will drive pre-, post- and analytical practice changes. • Stronger partnerships will underscore relationships within and outside of the laboratory. • Capitalising on these new opportunities will enhance the value of the laboratory. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF