923 results on '"précision"'
Search Results
2. Accuracy of intraoral scanners in maxillary multiple restorations: An in vitro study.
- Author
- Aung, Hlaing Myint Myat, Linn, Thu Ya, Lee, Wei-Fang, Chao, Jen-Chih, Teng, Nai-Chia, Renn, Ting-Yi, and Chang, Wei-Jen
- Subjects
- SCANNING systems, LENGTH measurement, ACQUISITION of data, EDENTULOUS mouth, IN vitro studies
- Abstract
The accuracy of intraoral scanners (IOSs) plays a crucial role in the success of final restorations in digital workflows. Previous studies have shown that numerous factors affect the accuracy of IOSs, but most have evaluated accuracy under a single restoration condition. Therefore, the aim of this study was to evaluate the accuracy of two IOSs with different data acquisition methods across multiple restorations. A partially edentulous model with preparations was created and scanned with the laboratory scanner E4 to serve as the reference model. Two IOSs, Trios 3 and Virtuo Vivo, were used in this study, and each scan was performed with the same scanning strategy. The trueness and precision of each scan were compared using surface-matching software, and the data were statistically analyzed. Trios 3 showed no significant difference from Virtuo Vivo in the trueness of the full-arch, single-crown, and edentulous areas, but had better trueness in the 3-unit bridge area (P = 0.008), whereas Virtuo Vivo showed better precision (P = 0.003). Linear dental measurements did not differ between the two scanners. In summary, Trios 3 had better trueness in the 3-unit bridge area than Virtuo Vivo, with no significant difference in the other preparation areas, while Virtuo Vivo showed better precision. Our results can inform the selection of IOSs for various restorations in clinical practice; however, as this was an in vitro study, the chairside challenges of IOSs should also be considered. [ABSTRACT FROM AUTHOR]
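For orientation, the two accuracy components named here can be operationalized roughly as deviation from a reference (trueness) and deviation between repeated scans (precision). The sketch below assumes an RMS metric over aligned point-wise deviations; the abstract does not name the exact metric its surface-matching software reports, and the numbers are synthetic stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical aligned point-wise deviations (µm) standing in for
# surface-matched scan data.
scan_a = rng.normal(5, 30, 5000)   # scan 1 vs. the E4 reference
scan_b = rng.normal(4, 32, 5000)   # scan 2 vs. the E4 reference

def rms(x):
    return float(np.sqrt(np.mean(np.square(x))))

trueness_a = rms(scan_a)             # deviation from the reference model
precision_ab = rms(scan_a - scan_b)  # agreement between repeated scans

print(f"trueness ≈ {trueness_a:.1f} µm, precision ≈ {precision_ab:.1f} µm")
```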
- Published
- 2024
- Full Text
- View/download PDF
3. Comparison of precision of implant placement between two different guided systems for static computer-assisted implant surgery: A simulation-based experimental study.
- Author
- Pattanasirikun, Papon, Arunjaroensuk, Sirida, Panya, Sappasith, Subbalekha, Keskanya, Mattheos, Nikos, and Pimkhaokham, Atiphan
- Subjects
- OPERATIVE dentistry, DENTAL drilling, DENTAL implants, COMPUTER-aided design, SLEEVES
- Abstract
Many designs of static computer-assisted implant surgery (sCAIS) are available to help clinicians achieve a proper implant position. However, no study had isolated the design itself to evaluate whether a sleeve-in-sleeve or a sleeve-on-drill design provides the more accurate implant position. The purpose of this study was to investigate the precision of implant placement with sleeve-in-sleeve and sleeve-on-drill sCAIS designs. Thirty-two models were fabricated simulating a patient with bilaterally missing first premolars. Eight models (sixteen implants) were assigned to each group: Groups A, B, and C represented the sleeve-in-sleeve design with 2, 4, and 6 mm sleeve heights, respectively, and Group D represented the integrated sleeve-on-drill design with a 4 mm sleeve height. The 3D deviations at the implant platform and apex and the angular deviation were measured, and data were analyzed using one-way ANOVA (P < 0.05). The overall deviation ranged from 0.40 ± 0.14 mm (Group A) to 0.73 ± 1.54 mm (Group C) at the platform and from 0.46 ± 0.16 mm (Group A) to 1.07 ± 0.37 mm (Group C) at the apex, and the angular deviation ranged from 0.86 ± 0.89° (Group A) to 3.40 ± 1.29° (Group C). Groups A and B showed significantly less deviation than Groups C and D (P < 0.05). There was no statistically significant difference in any measured parameter between Groups A and B, or between Groups C and D (P > 0.05). Sleeve-in-sleeve sCAIS demonstrated higher precision than sleeve-on-drill sCAIS. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Thrust ripple suppression analysis of moving-magnet-type linear synchronous motor based on independent coil.
- Author
- Sun, Qinwei, Wang, Mingyi, Liu, Minghong, Zhang, Chengming, and Li, Liyi
- Subjects
- *ELECTROMAGNETIC forces, *SYNCHRONOUS electric motors, *MAGNETIC pole, *PERMANENT magnets, *POWER resources
- Abstract
In this paper, a novel thrust ripple suppression method for a multi-secondary permanent magnet synchronous linear motor (PMLSM) based on an independent coil structure is proposed. The independent coil structure enables an independent power supply for each coil by changing the driving mode of the coils; combined with the new power supply strategy, the detent force and electromagnetic force fluctuation can be suppressed. Firstly, the cogging and end forces of the moving-magnet-type linear motor are separated using periodic and vector boundary conditions, and a harmonic analysis is carried out. An analytical model of the air gap magnetic field based on virtual magnetic poles is established to solve for the back EMF and the electromagnetic force fluctuation of the motor, and the generation mechanism and harmonics of the force fluctuation are analyzed. A multi-secondary PMLSM based on independent coils is then proposed, the principle by which it suppresses thrust ripple is explained, and the coupling effect between modules is analyzed. Finally, a zero-crossing power supply strategy is proposed. Simulation and experimental results show that the multi-secondary independent-coil PMLSM can effectively suppress the detent force and electromagnetic force fluctuation. • The separation model of motor thrust ripple is established, and the harmonic content of the thrust fluctuation is analyzed. • A multi-secondary motor topology is proposed, which can greatly reduce the no-load detent force of the motor. • An independent coil structure is proposed, which can suppress the thrust ripple at no load and under load at the same time. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. On indirect estimation of small area parameters under ranked set sampling.
- Author
- Ahmed, Shakeel, Albalawi, Olayan, and Shabbir, Javid
- Subjects
- SMALL area statistics, DEMOGRAPHIC surveys, SAMPLE size (Statistics), REGRESSION analysis, HEALTH surveys
- Abstract
In investigations of domains under post-stratified random sampling, it is difficult to achieve acceptable precision for domain-specific estimates because of small sample sizes. Small area estimation, a popular technique that has been widely used over the past few decades, involves indirect estimation using auxiliary data from the entire population. In this article, we utilize a ranked set sampling (RSS) technique to achieve greater precision in area-specific estimation, under the assumption that ranking small sets of units is simple, inexpensive, and error-free. RSS reduces the required sample size for a fixed degree of precision, or increases precision for a fixed sample size. We construct area-specific direct estimators for the population total under homogeneous, ratio, and regression models. To evaluate the effectiveness and applicability of the suggested RSS technique, data from the Pakistan Demographic and Health Survey (PDHS 2017–18) and the Iris flower dataset are used. The effectiveness of the RSS mechanism is supported by both theoretical properties and bootstrap tests. [ABSTRACT FROM AUTHOR]
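For readers unfamiliar with the mechanism, below is a minimal sketch of balanced ranked set sampling under the perfect-ranking assumption the abstract states; the set size, number of cycles, and population are illustrative stand-ins, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(42)

def rss_sample(population, set_size, cycles):
    """Balanced ranked set sample: per cycle, draw set_size random sets of
    set_size units each, rank each set (perfect ranking assumed), and keep
    the i-th order statistic from the i-th set."""
    kept = []
    for _ in range(cycles):
        for i in range(set_size):
            s = rng.choice(population, size=set_size, replace=False)
            kept.append(np.sort(s)[i])
    return np.array(kept)

pop = rng.lognormal(mean=3.0, sigma=0.6, size=100_000)  # skewed stand-in
n = 3 * 40
rss = rss_sample(pop, set_size=3, cycles=40)
srs = rng.choice(pop, size=n, replace=False)            # same n, simple random

# At equal sample size, the RSS mean is typically a more precise estimator
# of the population mean than the SRS mean.
print(f"pop {pop.mean():.2f} | RSS {rss.mean():.2f} | SRS {srs.mean():.2f}")
```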
- Published
- 2024
- Full Text
- View/download PDF
6. Accuracy, precision and diagnostic accuracy of oral thermometry in pediatric patients.
- Author
- Deligakis, Apostolos, Aretha, Diamanto, Almpani, Eleni, Stefanopoulos, Nikolaos, Salamoura, Maria, and Kiekkas, Panagiotis
- Abstract
To determine the accuracy and precision of oral thermometry in pediatric patients, along with its sensitivity and specificity for detecting fever and hypothermia, with rectal thermometry as the reference standard. This method-comparison study enrolled patients aged 6 to 17 years admitted to the surgical ward during a 21-month period. The KD-2150 and IVAC Temp Plus II were used for oral and rectal temperature measurements, respectively. Fever and hypothermia were defined as core temperatures ≥38.0 °C and ≤35.9 °C, respectively. The accuracy and precision of oral thermometry were determined by the Bland-Altman method. Sensitivity, specificity, positive and negative predictive values, and correct classification of oral temperature cutoffs for detecting fever and hypothermia were calculated. Based on a power analysis, 100 pediatric patients were enrolled. The mean difference between oral and rectal temperatures was −0.34 °C, with 95 % limits of agreement ranging between −0.52 and −0.16. The sensitivity and specificity of oral thermometry were 0.50 and 1.0 for detecting fever, and 1.0 and 0.88 for detecting hypothermia, respectively. An oral temperature value of 37.6 °C provided excellent sensitivity for detecting fever, while a value of 35.7 °C provided optimal sensitivity and specificity for detecting hypothermia. Oral thermometry had low sensitivity for detecting fever and suboptimal specificity for detecting hypothermia; thus, temperature values <38.0 °C cannot exclude fever, and values <36.0 °C cannot confirm hypothermia, with high certainty. The diagnostic accuracy of oral thermometry can be improved by using oral temperature thresholds below 38.0 °C for detecting fever and below 35.9 °C for detecting hypothermia. • Oral thermometry systematically underestimated rectal temperatures. • The mean difference ± SD of oral vs. rectal temperatures was −0.34 ± 0.09 °C. • Oral thermometry had low sensitivity for detecting fever. • Oral thermometry had excellent sensitivity for detecting hypothermia. [ABSTRACT FROM AUTHOR]
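The Bland-Altman quantities cited here (mean difference and 95 % limits of agreement) are straightforward to reproduce; the sketch below uses made-up paired readings, not the study's data.

```python
import numpy as np

# Hypothetical paired oral/rectal readings (°C), stand-ins for the study's data.
oral = np.array([36.8, 37.1, 36.5, 38.0, 36.2, 37.4, 37.9, 36.6])
rectal = np.array([37.1, 37.5, 36.8, 38.4, 36.5, 37.7, 38.2, 37.0])

diff = oral - rectal
bias = diff.mean()        # accuracy: mean oral-minus-rectal difference
sd = diff.std(ddof=1)     # precision: SD of the differences
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd  # 95 % limits of agreement

print(f"bias {bias:.2f} °C, 95 % LoA [{lower:.2f}, {upper:.2f}] °C")
```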
- Published
- 2024
- Full Text
- View/download PDF
7. Improving Transparency in Romanian Public Procurement: Machine Learning to Classify Bidders and Validate Decisions.
- Author
- Silaschi, Iasmina Oana, Pop, Ioan Daniel, and Coroiu, Adriana Mihaela
- Subjects
- MACHINE learning, GOVERNMENT purchasing, PUBLIC spaces, PUBLIC companies, AUCTIONS
- Abstract
The primary objective of this research paper is to conduct a real-life case study centred on a substantial dataset procured from numerous Romanian public procurement tenders. The study aims to classify bidders according to their compatibility and suitability, an assessment based on the accuracy of different machine learning models. The dataset used is a vast body of information: it includes data from a total of 289,472 enterprises, 47,974 contracting authorities and 42,474 tenders. The use of this extensive dataset ensures a robust and comprehensive analysis, enabling the extraction of meaningful insights and the drawing of reliable conclusions. In the quest to rank bidders effectively, the performance of nine intelligent algorithms is evaluated and compared. The top-performing algorithms in this context are Decision Trees, along with ensemble methods derived from them, namely the Random Forest Classifier and the Extra Trees Classifier. The outcomes of this study are satisfying, outperforming the metrics of a similar study conducted in the same domain. Given potential concerns about Romania's integrity in dealing with contracting companies in public procurement auctions, this study holds significant value in its ability to validate the decision-making process employed in previous auctions. By casting light on the effectiveness of past procurement decisions and offering strategic guidance for future acquisitions, this research holds substantial implications for improving transparency and streamlining the bidder selection process in public procurement. With potential for future work building upon it, it promises to become a valuable addition to enhancing the integrity and efficiency of public procurement in Romania. [ABSTRACT FROM AUTHOR]
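A minimal sketch of the kind of model comparison described, using scikit-learn's tree-based classifiers on synthetic data; the features, labels, and metric choice are placeholders, not the paper's dataset or protocol.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the bidder feature table.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

models = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
    "extra_trees": ExtraTreesClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```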
- Published
- 2024
- Full Text
- View/download PDF
8. Data Integration for Improving the Precision of Per Capita Expenditure Estimation Using Machine Learning Method.
- Author
- Ramanel, Vania Tresa, Nooraeni, Rani, Sumarni, Cucu, and Farhan, Muh
- Subjects
- K-nearest neighbor classification, CONSUMPTION (Economics), MACHINE learning, LEAST squares, MISSING data (Statistics)
- Abstract
The household expenditure variable is crucial because it is a determining indicator describing several dimensions, such as economic growth and community welfare. Expenditure variables are estimated from probability survey data such as the SBH and SUSENAS. However, detailed questions and a large sample are needed to obtain precise estimates of household expenditures by consumption group: the SBH is more detailed than SUSENAS but has fewer samples, while SUSENAS has more samples but less detailed responses. Combining these two surveys improves the precision of the resulting estimates; however, many values of the expenditure variables are missing in SUSENAS, so imputation is needed first. In this study we therefore used a machine learning approach, K-nearest neighbor (KNN) regression, to overcome this problem, and then integrated the data with a generalized least squares (GLS) approach to produce better estimates. Data from DKI Jakarta province, Indonesia, are used as a case study. The results show that KNN imputation combined with the integration of the two surveys increases the precision of the direct estimates for each expenditure group, with the greatest increase observed for the estimated average of the food expenditure group. [ABSTRACT FROM AUTHOR]
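A minimal sketch of the imputation step as described, using scikit-learn's KNN regressor on synthetic stand-ins for the two surveys; the variable names, dimensions, and covariates linking the surveys are assumptions, and the GLS combination step is only indicated.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Stand-ins: a small detailed survey (SBH-like) observes the expenditure
# variable y; a larger survey (SUSENAS-like) shares covariates X but lacks y.
X_small = rng.normal(size=(300, 4))
y_small = X_small @ np.array([2.0, 1.0, -0.5, 0.3]) + rng.normal(0, 0.2, 300)
X_large = rng.normal(size=(3000, 4))

knn = KNeighborsRegressor(n_neighbors=5).fit(X_small, y_small)
y_imputed = knn.predict(X_large)   # fill the missing expenditure values

# The completed large-survey data would then feed a GLS combination of the
# two surveys' estimates (not shown here).
print(y_imputed[:5])
```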
- Published
- 2024
- Full Text
- View/download PDF
9. Precision Anterior Cruciate Ligament Reconstruction.
- Author
- Herman, Zachary J., Kaarre, Janina, Getgood, Alan M.J., and Musahl, Volker
- Abstract
Precision anterior cruciate ligament reconstruction (ACLR) refers to the individualized approach to prerehabilitation, surgery (including anatomy, bony morphology, and repair/reconstruction of concomitant injuries), postrehabilitation, and functional recovery. This individualized approach is poised to revolutionize orthopedic sports medicine, aiming to improve patient outcomes. The purpose of this article is to provide a summary of precision ACLR, from the time of diagnosis to the time of return to play, with additional insight into the future of ACLR. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Enhancing artificial intelligence-doctor collaboration for computer-aided diagnosis in colonoscopy through improved digital literacy.
- Author
- Mori, Yuichi, Jin, Eun Hyo, and Lee, Dongheon
- Abstract
Establishing appropriate trust and maintaining a balanced reliance on digital resources are vital for accurate optical diagnoses and for the effective integration of computer-aided diagnosis (CADx) in colonoscopy. Active learning using diverse polyp image datasets can help in developing precise CADx systems, and enhancing doctors' digital literacy and their ability to interpret CADx results is crucial. Explainable artificial intelligence (AI) addresses the opacity of these systems, and textual descriptions, along with AI-generated content, deepen doctors' interpretation of AI-based findings. AI that conveys its uncertainties and decision confidence aids doctors' acceptance of its results. Optimal AI-doctor collaboration requires improving algorithm performance and transparency, addressing uncertainties, and enhancing doctors' optical diagnostic skills. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Evaluation of the precision of the Plasmodium knowlesi growth inhibition assay for Plasmodium vivax Duffy-binding protein-based malaria vaccine development.
- Author
- Mertens, Jonas E., Rigby, Cassandra A., Bardelli, Martino, Quinkert, Doris, Hou, Mimi M., Diouf, Ababacar, Silk, Sarah E., Chitnis, Chetan E., Minassian, Angela M., Moon, Robert W., Long, Carole A., Draper, Simon J., and Miura, Kazutoyo
- Subjects
- *MALARIA vaccines, *VACCINE development, *PLASMODIUM, *PLASMODIUM vivax, *MONOCLONAL antibodies, *MALARIA prevention, *CONFIDENCE intervals, *RESEARCH personnel
- Abstract
• The PkGIA will be an essential selection tool for P. vivax vaccine development. • The error of the assay was evaluated with human anti-PvDBPII antibodies. • Significant assay-to-assay variation was observed. • The 95 % confidence interval of inhibition for a given number of PkGIAs was determined. Recent data indicate the increasing disease burden and importance of Plasmodium vivax (Pv) malaria. A robust assay will be essential for blood-stage Pv vaccine development. Results of the in vitro growth inhibition assay (GIA) with transgenic P. knowlesi (Pk) parasites expressing the Pv Duffy-binding protein region II (PvDBPII) correlate with in vivo protection in the first PvDBPII controlled human malaria infection (CHMI) trials, making the PkGIA an ideal selection tool once the precision of the assay is defined. To determine the precision in percentage of inhibition in GIA (%GIA) and in GIA50 (the antibody concentration that gives 50 %GIA), ten GIAs with transgenic Pk parasites were conducted with four different anti-PvDBPII human monoclonal antibodies (mAbs) at concentrations of 0.016 to 2 mg/mL, and three GIAs with eighty anti-PvDBPII human polyclonal antibodies (pAbs) at 10 mg/mL. A significant assay-to-assay variation was observed, and the analysis revealed a standard deviation (SD) of 13.1 in the mAb and 5.94 in the pAb dataset for %GIA, with a LogGIA50 SD of 0.299 (for mAbs). Moreover, the 95 % confidence interval (95 % CI) for %GIA or GIA50 in repeat assays was calculated in this investigation. The error range determined in this study will help researchers to compare PkGIA results from different assays and studies appropriately, thus supporting the development of future blood-stage malaria vaccine candidates, specifically second-generation PvDBPII-based formulations. [ABSTRACT FROM AUTHOR]
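As a rough illustration of how an assay-to-assay SD translates into a repeat-assay confidence interval, the sketch below assumes normally distributed assay errors and uses the mAb SD quoted in the abstract; the observed %GIA value and the exact CI construction are placeholders, not the paper's calculation.

```python
import math

sd_gia = 13.1      # assay-to-assay SD of %GIA for mAbs (from the abstract)
observed = 50.0    # hypothetical %GIA reading from a new assay

# Averaging n independent repeat assays shrinks the error by sqrt(n).
for n in (1, 3, 10):
    half = 1.96 * sd_gia / math.sqrt(n)
    print(f"n={n:2d} repeat assays: 95 % CI ≈ {observed:.1f} ± {half:.1f} %GIA")
```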
- Published
- 2024
- Full Text
- View/download PDF
12. Spectral line confocal sensor signal analysis and correction: Unlocking reflectance difference sample measurements.
- Author
- Wang, Shuai, Diao, Kuan, and Liu, Xiaojun
- Subjects
- *OPTICAL measurements, *SURFACE topography measurement, *REFLECTANCE, *SPECTRAL lines, *SURFACE topography, *NONNEGATIVE matrices
- Abstract
Spectral line confocal imaging (LCI) technology enables high-resolution, rapid 3D surface morphology analysis of transparent materials and other samples, positioning it as a leading optical measurement tool. Nevertheless, when the sensor measures samples with significant variations in surface reflectance, the peak signals of the acquired images exhibit low signal-to-noise ratios or become distorted, preventing the peak extraction algorithm from accurately locating peak positions and making it difficult to reconstruct the precise surface topography. To enhance the sensor's dynamic measurement range and adaptability, this paper introduces an adaptive correction approach. It uses a regularized non-negative matrix decomposition to separate images into layers of various grey levels, followed by weighted frequency filtering and Gaussian enhancement of all the grey layers; these layers are finally fused into one image with high contrast and uniformity. Using this approach, the sensor is capable of precise 3D reconstruction with high-quality peak signals. The study experimentally demonstrates the efficacy of the suggested approach by measuring an interdigital electrode and printed circuit boards (PCBs). Notably, the technique presented in this paper is not limited to spectral line confocal instruments and is also effective for line laser and line structured light instruments. • A spectral line confocal system for high-speed 3D surface topography measurements was developed and built in-house. • An adaptive signal correction algorithm is proposed to address the problem of measuring surfaces with reflectance differences. • The approach enhances the dynamic measurement range and adaptability of the instrument. • The method is simple and effective, requiring only a single measurement followed by processing of a single image. • The method is also applicable to line laser and line structured light instruments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. A method combining active control with passive regulation to enhance the vibration suppression capability of linear motor-driven aerostatic stage.
- Author
- Xiao, Yu, Yu, Deping, Chen, Dongsheng, and Jiang, Yufeng
- Subjects
- *PID controllers, *ACCELERATION (Mechanics), *DYNAMICAL systems, *MEASURING instruments, *FRICTION, *ACTIVE noise control
- Abstract
The linear motor-driven aerostatic stage (LMDAS) has been extensively applied in high-precision measuring instruments and in precision equipment requiring exceptional velocity and acceleration performance, owing to its low friction, high precision, and high dynamic performance. Nevertheless, the LMDAS is vulnerable to external disturbances because of the weak damping of the aerostatic guideway's gas film, which directly degrades the positioning accuracy and dynamic characteristics of the system and results in flutter. This paper proposes a method for improving the overall damping of the system and its ability to suppress vibrations by enhancing the damping in the driving direction through a combination of active control and passive regulation. An active control algorithm based on an active disturbance rejection controller and a passive regulation technology based on the linear motor are presented. Both numerical and experimental studies demonstrated that this approach is superior to the traditional PID controller commonly employed in LMDAS in terms of steady-state performance, dynamic response, and vibration suppression. In particular, the method significantly reduced the maximum vibration acceleration of the LMDAS in all three directions, achieving a 50 % reduction compared with the conventional PID controller. • Enhancing the driving direction's damping improves the vibration suppression ability. • The method combines active control and passive regulation to enhance damping. • An active controller based on an active disturbance rejection controller was developed. • A passive regulation technology using electromagnetic damping was proposed. • The method's effectiveness in improving vibration suppression was verified. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Comparing repeatability and reproducibility of topographic measurement types directly using linear regression analyses of measured heights.
- Author
- Peta, Katarzyna, Love, George, and Brown, Christopher A.
- Subjects
- *REGRESSION analysis, *LINEAR statistical models, *STATISTICAL reliability, *SURFACE topography, *LENGTH measurement
- Abstract
This paper describes and illustrates a convenient new method for checking repeatability and reproducibility by direct comparison of measured heights. Focus variation, confocal, and interferometric optical areal profiling have been integrated as modes on the Sensofar S neox and are used here. Repeated, sequential measurements are made with these different measurement types without repositioning the measurand. Several different positions are measured on the same measurand, an electroformed standard areal surface with an irregular topography. Height (z) measurements at individual locations (x, y) are plotted against repeated measurements at the same position on the measurand in what are called H–H plots. These plots can be used for rapid evaluation of topographic measurement repeatability in ordinary topographic measurements. Exceptionally, such plots can also be used to see how well one type of measurement can reproduce another, such as confocal, focus variation, and interferometric, when they are integrated as modes on the same measurement instrument. Only when different measurement types are included as modes on the same instrument can they be compared directly on their reproduction of each of the large number of locations (>10⁵) in topographic measurements at the same position on a measurand. One way of quantifying repeatability and reproducibility is with the coefficients of determination (R²) and slopes of linear regression analyses on these H–H plots. • Direct comparison of heights in repeated topographic measurements by linear regression. • Reproducibility between confocal, focus variation and interferometric measurements. • Method for estimating topographic measurement repeatability under actual conditions. [ABSTRACT FROM AUTHOR]
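A minimal sketch of the H–H idea as described: pair the heights from two repeated measurements location by location and regress one on the other. The synthetic height maps and noise level below are placeholders, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-ins for two repeated height maps (µm) at the same position.
z1 = rng.normal(0.0, 1.0, 100_000)        # first measurement
z2 = z1 + rng.normal(0.0, 0.05, z1.size)  # repeat, small added noise

# H–H plot statistics: slope and R² of the repeat vs. the first measurement.
slope, intercept = np.polyfit(z1, z2, deg=1)
r2 = np.corrcoef(z1, z2)[0, 1] ** 2

# Slope and R² near 1 indicate good repeatability (or reproducibility,
# when z1 and z2 come from different measurement modes).
print(f"slope={slope:.3f}, intercept={intercept:.4f}, R²={r2:.4f}")
```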
- Published
- 2024
- Full Text
- View/download PDF
15. Precise Fabrication of Ocular Inserts Using an Innovative Laser-Driven CaliCut Technology: In Vitro and in Vivo Evaluation.
- Author
- Rana, Dhwani, Beladiya, Jayesh, Sheth, Devang, Salave, Sagar, Sharma, Amit, Jindal, Anil B., Patel, Rikin, and Benival, Derajram
- Subjects
- *DRUG delivery systems, *CASTOR oil, *DRY eye syndromes, *MELT spinning, *INTELLIGENT control systems, *HIGH technology
- Abstract
Ocular inserts offer distinct advantages, including a preservative-free drug delivery system, the ability to provide tailored drug release, and ease of administration. The present research paper delves into the development of an innovative ocular insert using CaliCut technology. Complementing the hot melt extrusion (HME) process, CaliCut, an advanced technology in ocular insert development, employs precision laser gauging to achieve accurate cutting of inserts to the desired dimensions. Its intelligent control over the stretching process, through feedback-based belt speed adjustment, ensures accuracy and consistency in dosage form manufacturing. Dry eye disease (DED) poses a significant challenge to ocular health, necessitating innovative approaches to alleviate its symptoms. In this pursuit, castor oil has emerged as a promising therapeutic agent: it increases the thickness of the lipid layer in the tear film, thus improving tear film stability and reducing tear evaporation. To harness these advantages, this study focuses on the development and comprehensive characterization of castor oil-based ocular inserts. Additionally, an in vivo irritancy evaluation in rabbits was undertaken to assess the inserts' safety and biocompatibility. By harnessing the HME and CaliCut techniques in the formulation process, the study demonstrates their instrumental role in the successful development of ocular inserts. Graphical abstract: development of ocular inserts using HME and CaliCut and their characterization. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. A fast method for the calculation of refrigerant thermodynamic properties in a refrigeration cycle.
- Author
- Khoury, Joseph Al, Haddad, Rabih Al, Shakrina, Ghiwa, Malham, Christelle Bou, Sayah, Haytham, Bouallou, Chakib, and Nemer, Maroun
- Subjects
- *THERMODYNAMICS, *EQUATIONS of state, *DYNAMICAL systems, *REFRIGERANTS, *REFRIGERATION & refrigerating machinery, *DYNAMIC simulation
- Abstract
• The CoolProp thermodynamic equations of state and the implicit fitting method were investigated for calculating refrigerants' thermodynamic properties. • A dynamic cascade refrigeration cycle was developed and simulated with both methods for comparison. • The simulation was 30 times faster with the implicit fitting method than with the CoolProp equations of state. • The implicit fitting method showed high accuracy. Modeling and simulating dynamic, complex energy systems composed of a large number of components, where every component model is described by mathematical equations, can require substantial computational time, and the calculation of refrigerant thermodynamic properties has a large impact on the computational speed of the model. In an attempt to speed up simulations, two implementation methods for calculating refrigerants' thermodynamic properties were investigated: the CoolProp thermodynamic equations of state and the implicit fitting method. A complex dynamic cascade refrigeration cycle was developed and simulated using the two implementation methods, and the advantages and disadvantages of each were discussed. The results show that using the implicit fitting method instead of CoolProp for simulations of complex dynamic systems in Dymola accelerates the simulation by about 30 times. In addition, the implicit method is accurate and offers more flexibility for modeling complex, realistic energy systems. [ABSTRACT FROM AUTHOR]
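For context, a property lookup through CoolProp's high-level Python interface looks like the sketch below (the fluid and state point are arbitrary examples); the paper's implicit fitting method replaces such equation-of-state calls with pre-fitted functions, which is where the reported ~30x speed-up comes from.

```python
from CoolProp.CoolProp import PropsSI

# Density (kg/m³) and specific enthalpy (J/kg) of R134a at 300 K and 1 atm,
# evaluated from CoolProp's equations of state.
rho = PropsSI("D", "T", 300.0, "P", 101325.0, "R134a")
h = PropsSI("H", "T", 300.0, "P", 101325.0, "R134a")
print(rho, h)
```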
- Published
- 2024
- Full Text
- View/download PDF
17. Accuracy comparison of scan segmental sequential ranges with two intraoral scanners for maxilla and mandible.
- Author
- Liu, Chih-Te, Chen, Jen-Hao, Du, Je-Kang, Hung, Chun-Cheng, and Lan, Ting-Hsun
- Subjects
- MAXILLA, MANDIBLE, SCANNING systems, ONE-way analysis of variance
- Abstract
The accuracy of a full-arch scan made with an intraoral scanner should be validated under clinical conditions. This study aimed to compare the accuracy of full-arch digital impressions of the maxilla and mandible made with two intraoral scanners and three different scan segmental sequential ranges. A dental model with 28 teeth in their normal positions served as the reference. Sixty full-arch scans were performed using Trios 3 and Trios 4, employing scanning strategies O (the manufacturer's original method), OH (segmental sequential ranges of one half), and TQ (segmental sequential ranges of a third quarter). Trueness was evaluated by comparing the digital impressions with a reference dataset using specialized software, and one-way ANOVA and Tukey tests assessed differences between the groups. For Trios 3 on the maxilla, no significant difference in trueness was found among the groups; in the mandible, strategy O exhibited a significant difference (P = 0.008), with the highest deviation. For Trios 4 on the maxilla, strategy TQ demonstrated the lowest deviation, with a significant difference (P = 0.006); in the mandible, no significant difference in trueness was found among the groups. Strategy TQ exhibited the best trueness for both Trios 3 and Trios 4, suggesting it may be preferred for higher accuracy. Clinicians should consider these findings when selecting scanning strategies and intraoral scanners for specific cases. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Elemental analysis and quantification method development and validation of stigmasterol in Malaxis acuminata-A vanishing orchid.
- Author
- Arora, Mamta
- Subjects
- *TRACE elements, *ELEMENTAL analysis, *WAVELENGTH dispersive X-ray spectroscopy, *COPPER, *BIOACTIVE compounds, *THIN layer chromatography
- Abstract
Orchids are highly valued for their therapeutic properties. Malaxis acuminata (vernacular name: Jeevak) is a vanishing orchid with notable healing potential. Wavelength dispersive X-ray fluorescence (WD-XRF) spectroscopy was employed to conduct an elemental analysis of this orchid. The elements potassium (K), calcium (Ca), phosphorus (P), magnesium (Mg), silicon (Si), sulphur (S), chlorine (Cl), aluminium (Al), iron (Fe), ruthenium (Ru), manganese (Mn), zinc (Zn), rubidium (Rb), titanium (Ti), copper (Cu), strontium (Sr), and nickel (Ni) are documented quantitatively and qualitatively. These elements are pivotal in shaping the plant's medicinal potential. Likewise, a multitude of organic molecules within the plant exhibit therapeutic potential; stigmasterol is one such molecule, esteemed for its remarkable properties and spectrum of health benefits. The present investigation additionally encompassed the establishment and validation of a methodology for the precise quantification of stigmasterol in the petroleum ether extract, using high-performance thin-layer chromatography (HPTLC). The bioactive compound was isolated using a mobile phase of toluene:ethyl acetate:methanol (8:1.5:0.5, v/v/v), and an optimal wavelength of 580 nm was ascertained. Validation of this method was conducted in accordance with the guidelines of the International Council for Harmonisation (ICH). The calibration range for stigmasterol spanned 400 to 3600 micrograms per spot (μg/spot). The Rf (retention factor) value for the separation of stigmasterol was 0.56 ± 0.04. The limit of detection (LOD) and limit of quantification (LOQ) were established at 15.954 and 48.344 nanograms per spot (ng/spot), respectively. Linearity (r² = 0.999), accuracy (98.14–99.21 %), precision (RSD = 1.953–2.984 %), specificity and robustness were evaluated per the ICH guidelines. The stigmasterol content was 634.6 ± 3.28 μg/ml, a promising result. This method has substantial applicability in the pharmaceutical, nutraceutical, food, and health-related sectors. • Potassium, calcium, phosphorus, magnesium, silicon, sulphur, chlorine, aluminium, iron, ruthenium, manganese, zinc, rubidium, titanium, copper, strontium, and nickel were identified in M. acuminata through wavelength dispersive X-ray fluorescence spectroscopy (WD-XRF). • Method development and validation (ICH) were performed to estimate the concentration of stigmasterol using high-performance thin-layer chromatography (HPTLC). • The calibration range for the bioactive compound was 400–3600 μg/spot; the Rf value was 0.56 ± 0.04; the LOD and LOQ were 15.954 and 48.344 ng/spot, respectively. • The developed HPTLC method exhibited excellent linearity (r² = 0.999), accuracy (98.14–99.21 %), and precision (RSD = 1.953–2.984 %). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
19. Personalized redox biology: Designs and concepts.
- Author
- Margaritelis, Nikos V.
- Subjects
- *OXIDATION-reduction reaction, *HUMAN physiology, *BIOLOGY, *RESEARCH personnel, *CLINICAL medicine, *NUTRITIONAL genomics
- Abstract
Personalized interventions are regarded as a next-generation approach in almost all fields of biomedicine, such as clinical medicine, exercise, nutrition and pharmacology. At the same time, an increasing body of evidence indicates that redox processes regulate, at least in part, multiple aspects of human physiology and pathology. As a result, the idea of personalizing redox treatments to improve their efficacy has gained popularity among researchers in recent years. The aim of the present primer-style review was to highlight some crucial yet underappreciated methodological, statistical, and interpretative concepts in the redox biology literature, while also providing a physiology-oriented perspective on personalized redox biology. The topics addressed are: (i) the critical issue of investigating the potential existence of inter-individual variability; (ii) the importance of distinguishing a genuine and consistent response of a subject from a chance finding; (iii) the challenge of accurately quantifying the effect of a redox treatment when dealing with 'extreme' groups, due to mathematical coupling and regression to the mean; and (iv) research designs and analyses that have been implemented in other fields and can be reframed and exploited in a redox biology context. • Personalized treatments appear to be the way ahead in translational biology. • Personalized redox biology presumes that a wide inter-individual variability exists. • No hard evidence exists supporting the existence of redox inter-individual variability. • Methodological and statistical concepts are presented in relation to personalization. • The feasibility and applicability of personalized redox treatments are still unknown. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. Evaluation of lower limb and pelvic marker placement precision among different evaluators and its impact on gait kinematics computed with the Conventional Gait Model.
- Author
- Fonseca, Mickael, Gasparutto, Xavier, Grouvel, Gautier, Bonnefoy-Mazure, Alice, Dumas, Raphaël, and Armand, Stéphane
- Subjects
- *LEG, *GAIT disorders, *ANALYSIS of variance, *STANDARD deviations, *PEARSON correlation (Statistics)
- Abstract
Gait analysis relies on the accurate and precise identification of anatomical landmarks to provide reliable and reproducible data. More specifically, the precision of marker placement across repeated measurements is responsible for increased variability in the output gait data. The objective of this study was to quantify the precision of marker placement on the lower limbs through a test-retest procedure and to investigate its propagation to kinematic data. The protocol was tested on a cohort of eight asymptomatic adults with four evaluators of different levels of experience. Each evaluator performed three repeated marker placements for each participant. The standard deviation was used to calculate the precision of the marker placement, the precision of the orientation of the anatomical (segment) coordinate systems, and the precision of the lower limb kinematics. In addition, one-way ANOVA was used to compare intra-evaluator marker placement and kinematic precisions across the evaluators' levels of experience. Finally, the Pearson correlation between marker placement precision and kinematic precision was analyzed. Results showed a precision of skin marker placement within 10 mm intra-evaluator and 12 mm inter-evaluator. Analysis of kinematic data showed good to moderate reliability for all parameters apart from hip and knee rotation, which demonstrated poor intra- and inter-evaluator precision. Inter-trial variability was lower than intra- and inter-evaluator variability. Moreover, experience had a positive impact on kinematic reliability, since evaluators with more experience showed a statistically significant increase in precision for most kinematic parameters. However, no correlation was observed between marker placement precision and kinematic precision, which indicates that an error in the placement of one specific marker can be compensated for, or amplified, in a non-linear way by errors in the placement of other markers. • The evaluator's experience plays a positive role in some kinematic parameters of gait. • No correlation between marker placement precision and kinematic variability. • Wand and femoral epicondyle markers are the least precisely placed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
21. Basic measurement concepts.
- Author
- Wild, Christopher and Zin, Nico
- Abstract
Accurate monitoring and measurement of physiological parameters are fundamental in modern anaesthetic practice. The physical parameter being measured is known as the measurand. A measurement system can be thought of as a 'black box' whereby an input is processed, yielding an output. Components of a system include a sensor, transducer, signal conditioning unit, and display. A solid understanding of these devices and of their strengths, limitations and sources of error is fundamental to the delivery of safe and appropriate care. Signal conditioning involves reducing background noise, amplification and filtering of the signal, and analogue-to-digital conversion. The output of the measurement system should accurately reflect the parameter being measured. The performance of a measurement system can be characterised by its static and dynamic characteristics, including accuracy, precision, sensitivity and linearity. Clinical measurement systems behave like first-order or second-order dynamic systems. They are subject to drift and hysteresis, necessitating regular calibration. Error may be inherent to the equipment itself or may arise from user misinterpretation. This article outlines the basic principles of these measurement devices. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
22. Accuracy of intraoral scanning methods for maxillary Kennedy class I arch.
- Author
- Chang, I-Ching, Hung, Chun-Cheng, Du, Je-Kang, Liu, Chih-Te, Lai, Pei-Ling, and Lan, Ting-Hsun
- Subjects
- REMOVABLE partial dentures, COMPUTER-aided design software, DIGITAL dental impression systems, ONE-way analysis of variance, STANDARD language
- Abstract
The optimal strategy for scanning removable partial dentures (RPDs) remains unknown. This study investigated scanning strategies for patients with a maxillary Kennedy Class I arch and the measurement deviations of three such strategies. A standard maxilla model was positioned with a holder in a dental chair to simulate a natural patient position and posture. Standard Tessellation Language files for the reference models were produced with a desktop scanner, and model operation files were obtained with a TRIOS 3 Pod intraoral scanner and superimposed using Exocad computer-aided design software. The three scanning strategies evaluated in this study (Strategies M, T-R, and R-T) were used for nine scans each, and the resulting data were recorded. The deviations of the three strategies were statistically analyzed through one-way ANOVA and Tukey post hoc testing. The trueness of Strategies M, T-R, and R-T was 52.6 ± 31.0, 54.9 ± 27.6, and 50.1 ± 22.3 μm, respectively, with no statistically significant differences among the three groups (P > 0.05). However, Strategy T-R had the most even distribution across all measuring points. The deviations of the measurements obtained by the three scanning strategies were mostly between 30 and 70 μm, and the precision of the three strategies was similar as well. Although trueness did not differ significantly among the three strategies, Strategy T-R is recommended for use with a TRIOS 3 Pod scanner because it reduces the seesaw effect and highly stabilizes the RPD framework. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
23. Position accuracy criteria for planar flexural hinges.
- Author
- Verotti, M., Serafino, S., and Fanghella, P.
- Subjects
- *COMPLIANT mechanisms, *HINGES, *FLEXURE, *INFLECTION (Grammar)
- Abstract
With the increasing implementation of compliant mechanisms in high-precision and high-accuracy applications, the need to evaluate the positioning performance of flexure hinges becomes evident. In this paper, the determination of the accuracy of planar flexures is addressed by analyzing and comparing the criteria available in the literature, including a new criterion based on the pole of the displacements. For uniform flexures, an analytical formulation is developed for end-moment loads, whereas complex loading conditions, resulting in an inflection point, are analyzed and evaluated numerically. The accuracy criteria are also applied to analyzing the positioning performance of the cross-axis flexural pivot. Various relations among the different criteria are determined, and their limitations, such as the non-bijective correspondence with the deformed configurations, are discussed. The criteria are applied to the design of a high-accuracy cross-axis pivot. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. Controller design for high-speed, ultra-precision positioning of a linear motion stage on a vibrating machine base.
- Author
- Sato, Kaiji, Hisamatsu, Ryouhei, and Akamatsu, Kaoru
- Subjects
- *BANDPASS filters, *MACHINERY, *SYSTEMS design, *DYNAMIC models, *ULTRASONIC transducers
- Abstract
This paper presents a high-speed, ultra-precision point-to-point position control system design for precision stages with unknown characteristics that are affected by machine base vibration. The design is based on the nominal characteristic trajectory following (NCTF) control method, which can provide ultra-precision position control for precision stages without accurate dynamic models. However, conventional NCTF control systems neither provide sufficient vibration suppression nor exhibit suitable vibration suppression characteristics, and their position control performance is deteriorated by machine base vibration. To overcome this problem, a procedure for incorporating a bandpass filter and a derivative compensator into the NCTF control system is proposed. These compensators exhibit high vibration suppression ability when combined, rather than when used individually, and they can be designed easily without an accurate dynamic model. The effectiveness of the compensator combination was first investigated using a linear model and subsequently verified experimentally. Using the proposed procedure, a control system was designed for a precision stage with friction characteristics on a vibrating machine base. The designed control system suppresses the residual vibration immediately after the target value becomes constant, and the error converges to below 50 nm within 80 ms. • Position accuracy deterioration owing to vibration of the machine base is resolved. • A control system is proposed for high-speed, ultra-precision stage positioning. • Control of the relative displacement of the stage with respect to the machine base is enabled. • The efficiency of the two proposed types of vibration suppression compensators is proven. • Excellent performance of the control system is demonstrated. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. Soft precision and recall.
- Author
- Fränti, Pasi and Mariescu-Istodor, Radu
- Subjects
- *PATTERN recognition systems, *SOFT sets, *MACHINE learning
- Abstract
• Soft variants of precision and recall are introduced. • Set notation with soft cardinality is applied. • Application 1: evaluation of keyword extraction. • Application 2: evaluation of segmentation results. Precision and recall are classical measures used in machine learning. However, they are based on exact matching. This results in binary classification, where a predicted item is either a true or a false positive, even though inexact matching is often preferred in pattern recognition. To address this problem, we introduce soft variants of precision and recall based on an application-specific similarity measure. [ABSTRACT FROM AUTHOR]
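Below is a minimal sketch of the idea as stated in the abstract: replace exact matches with an application-specific similarity in [0, 1] and credit each item with its best match. The max-similarity aggregation is one plausible reading, not necessarily the paper's exact soft-cardinality formulation.

```python
import numpy as np

def soft_precision_recall(sim):
    """sim[i, j]: similarity in [0, 1] between predicted item i and
    ground-truth item j. Each predicted item is credited with its best
    ground-truth match (soft precision) and vice versa (soft recall)."""
    precision = sim.max(axis=1).mean()
    recall = sim.max(axis=0).mean()
    return precision, recall

# 3 predicted items vs. 2 ground-truth items, e.g. extracted keywords
# scored by string similarity against reference keywords.
sim = np.array([[0.9, 0.1],
                [0.2, 0.8],
                [0.4, 0.3]])
p, r = soft_precision_recall(sim)
print(f"soft precision {p:.2f}, soft recall {r:.2f}")
```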
- Published
- 2023
- Full Text
- View/download PDF
26. The ultrafast burst laser ablation of metals: Speed and quality come together.
- Author
- Žemaitis, Andrius, Gudauskytė, Ugnė, Steponavičiūtė, Saulė, Gečys, Paulius, and Gedvilas, Mindaugas
- Subjects
- *HIGH power lasers, *INDUSTRIAL metals, *MANUFACTURING processes, *LASER ablation, *INDUSTRIAL lasers, *FEMTOSECOND lasers, *COPPER
- Abstract
• Ultrafast bursts of pulses enable all-in-one highly efficient, rapid, high-quality, high-resolution laser micro-machining. • Bursts increased the material removal efficiency and rate by 18.0 %, 44.5 %, and 37.0 % for Al, Cu, and stainless steel. • Fast fabrication of 3D macrostructures with micro-features was achieved via laser ablation. • Precise high-power femtosecond laser ablation is possible owing to the division of pulses in time. Utilisation of high-power ultrafast lasers for ablation-based industrial processes such as milling, drilling or cutting requires high production rates and superior quality. In this paper, we demonstrate highly efficient, rapid and high-quality laser micro-machining of three industrial metals (aluminium, copper, and stainless steel). Our proposed optimisation method of dividing pulse energy in time results in the simultaneous enhancement of ablation efficiency (volume per energy) and ablation rate (volume per time) while maintaining a focused laser beam on the target surface and high resolution. A femtosecond burst laser producing pulses of τ = 350 fs duration at intra-burst repetition rates of f_P = 50 MHz was employed in the experiments. Owing to the use of bursts, the material removal efficiency and removal rate were increased by 18.0 %, 44.5 %, and 37.0 % for aluminium, copper, and stainless steel, respectively, compared with the best single-pulse performance. In addition to the high processing rate, burst-mode processing resulted in lower surface roughness. This technique is believed to be a solution enabling extremely high femtosecond laser powers for precise microfabrication. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
27. Deep-coupling neural network and genetic algorithm based on Sobol-PR for reactor lightweight optimization.
- Author
- Pan, Qingquan, Zheng, Songchuan, and Liu, Xiaojing
- Subjects
- GENETIC algorithms, ARTIFICIAL intelligence, ACHIEVEMENT
- Abstract
We propose a deep-coupling neural network and genetic algorithm method based on the Sobol-PR method for reactor lightweight shielding optimization. The Sobol method is first used to analyze the sensitivities between the inputs and outputs of the neural network, and these sensitivities are then used to adjust the fitness function of the genetic algorithm dynamically. Meanwhile, two indicators, precision and recall rate, are introduced to facilitate sample evaluation and selection: the precision quantifies the prediction ability of the neural network, and the recall rate quantifies the optimization efficiency of the genetic algorithm. The deep coupling between the neural network and the genetic algorithm based on the Sobol-PR method yields an integrated framework of "calculation-optimization-reconstruction-evaluation," which is applied to the lightweight shielding design of a small helium-xenon-cooled reactor. The performance of both the neural network and the genetic algorithm is improved, with the precision of the neural network reaching up to 99 % and the recall rate of the genetic algorithm reaching up to 84 %. Compared with the traditional method, the new method improves the ratio of ideal solutions by up to 3.8 times and the optimization depth by up to 3.2 times. • Introduction of artificial intelligence algorithms into reactor optimization. • Establishment of the Sobol-PR framework for the first time. • Achievement of deep coupling between the ANN and GA. • Improvement of the performance of the ANN and GA by up to 3.8 times. • Completion of the lightweight shielding design of the helium-xenon-cooled reactor. • An integrated framework of "calculation-reconstruction-evaluation" is proposed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Optimal two-time point longitudinal models for estimating individual-level change: Asymptotic insights and practical implications.
- Author
- Brandmaier, Andreas M., Lindenberger, Ulman, and McCormick, Ethan M.
- Abstract
Based on findings from a simulation study, Parsons and McCormick (2024) argued that growth models with exactly two time points are poorly suited to model individual differences in linear slopes in developmental studies. Their argument is based on an empirical investigation of the increase in precision for measuring individual differences in linear slopes when studies are progressively extended by adding an extra measurement occasion after one unit of time (e.g., a year) has passed. They concluded that two-time-point models are inadequate for reliably modeling change at the individual level and that these models should focus on group-level effects. Here, we show that these limitations can be addressed by deconfounding the influence of study duration and the influence of adding an extra measurement occasion on the precision of estimates of individual differences in linear slopes. We use asymptotic results to gauge and compare the precision of linear change models representing different study designs, and show that it is primarily the longer time span that increases precision, not the extra waves. Further, we show how the asymptotic results can also accommodate irregularly spaced intervals as well as planned and unplanned missing data. In conclusion, we would like to stress that true linear change can indeed be captured well with only two time points if careful study design planning is applied before running a study. [ABSTRACT FROM AUTHOR]
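The duration-versus-waves point can be illustrated with the error variance of a per-person OLS slope; this is a textbook result consistent with the abstract's argument, not the authors' full asymptotic derivation.

```latex
% For occasions t_1,\dots,t_n and residual variance \sigma^2, the OLS slope
% estimator for one person has
\operatorname{Var}\!\bigl(\hat{\beta}\bigr)
  = \frac{\sigma^{2}}{\sum_{i=1}^{n} (t_i - \bar{t})^{2}}.
% Two waves at t = 0, T:        \sum_i (t_i - \bar t)^2 = T^2/2.
% Adding a midpoint wave (0, T/2, T) leaves the sum at T^2/2:
% no gain in slope precision from the extra wave alone.
% Doubling the duration (waves at 0, 2T) gives 2T^2: a fourfold
% reduction in slope variance.
```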
- Published
- 2024
- Full Text
- View/download PDF
29. Some limitations of the concordance correlation coefficient to characterise model accuracy.
- Author
- Wadoux, Alexandre M.J.-C. and Minasny, Budiman
- Subjects
- PEARSON correlation (Statistics), STATISTICAL correlation, MODEL validation
- Abstract
Perusal of the environmental modelling literature reveals that Lin's concordance correlation coefficient is a popular validation statistic for characterising model or map quality. In this communication, we illustrate with synthetic examples three undesirable statistical properties of this coefficient. We argue that ignorance of these properties has led to frequent misuse of the coefficient in modelling and mapping studies. The stand-alone use of the concordance correlation coefficient is insufficient because i) it does not inform on the relative contributions of bias and correlation, ii) its values cannot be compared across different datasets or studies, and iii) it is prone to the same problems as other linear correlation statistics. The concordance coefficient was, in fact, initially intended for evaluating reproducibility over repeated trials of the same variable, not for characterising model accuracy. For the validation of models and maps, we recommend calculating statistics that, combined with the concordance correlation coefficient, represent various aspects of model or map quality and can be visualised together in a single figure with a Taylor or solar diagram. • The concordance correlation coefficient is a popular validation statistic. • It is frequently misused as a single index for the validation of models or maps. • Three illustrations with synthetic datasets show the limitations of this coefficient. • The concordance correlation coefficient cannot be used to compare different studies. • The stand-alone use of this coefficient is not recommended. [ABSTRACT FROM AUTHOR]
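Lin's coefficient itself is easy to compute, and a small synthetic example in the spirit of the paper's illustrations shows limitation i): predictions with very different error structures can score the same. The data below are placeholders, not the paper's examples.

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(1)
obs = rng.normal(10.0, 2.0, 1000)

pred_bias = obs + 1.2                              # perfect correlation, constant bias
pred_noise = obs + rng.normal(0.0, 1.2, obs.size)  # no bias, random scatter

# Near-identical CCC values despite very different error structures, so the
# coefficient alone cannot separate bias from lack of correlation.
print(f"biased: {ccc(obs, pred_bias):.3f}, noisy: {ccc(obs, pred_noise):.3f}")
```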
- Published
- 2024
- Full Text
- View/download PDF
30. Test–Retest Reliability and Precision of the Rotterdam Intrinsic Hand Myometer.
- Author
- McGee, Corey W., Burbach, Karin, and McIlrath, Samantha
- Abstract
The purpose of this study was to determine the test–retest reliability and precision of the Rotterdam Intrinsic Hand Myometer (RIHM) in healthy adults. Twenty-nine participants, originally recruited via convenience sampling at a Midwestern state fair, returned approximately 8 days later for retesting. An average of three trials for each of the five intrinsic hand strength measurements was collected using the same technique as during initial testing. Test–retest reliability was assessed using the intraclass correlation coefficient, ICC(2,3), and precision was evaluated using the standard error of measurement (SEM) and the minimal detectable change (MDC90/MDC%). Across all measures of intrinsic strength, the RIHM and its standardized procedures had excellent test–retest reliability. Index finger metacarpophalangeal flexion demonstrated the lowest reliability, while right small finger abduction, left thumb carpometacarpal abduction, and index finger metacarpophalangeal abduction had the highest reliability. Precision, as evidenced by the SEM and MDC values, was excellent for tests of left index and bilateral small finger abduction strength and acceptable for all other measurements. Test–retest reliability and precision of the RIHM across all measurements were excellent. These findings indicate that the RIHM is a reliable and precise tool for measuring intrinsic hand strength in healthy adults, although further research is needed in clinical populations. [ABSTRACT FROM AUTHOR]
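SEM and MDC follow from the reliability coefficient by standard formulas; the sketch below uses hypothetical numbers (the ICC, between-subject SD, and mean are not taken from the study).

```python
import math

icc = 0.95         # hypothetical test–retest reliability, e.g. ICC(2,3)
sd = 8.0           # hypothetical between-subject SD of the strength score (N)
mean_score = 40.0  # hypothetical mean score (N), for MDC%

sem = sd * math.sqrt(1.0 - icc)       # standard error of measurement
mdc90 = 1.645 * math.sqrt(2.0) * sem  # smallest change exceeding measurement
                                      # error with 90 % confidence
mdc_pct = 100.0 * mdc90 / mean_score

print(f"SEM {sem:.2f} N, MDC90 {mdc90:.2f} N, MDC% {mdc_pct:.1f} %")
```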
- Published
- 2024
- Full Text
- View/download PDF
31. Learning cost action planning models with perfect precision via constraint propagation.
- Author
- Garrido, Antonio
- Subjects
- *CONSTRAINT programming, *ARTIFICIAL intelligence, *ACTIVE learning
- Abstract
Data-driven AI is rapidly gaining importance. In the context of AI planning, a constraint programming formulation for learning action models in a data-driven fashion is proposed. The data comprise plan observations, which are automatically transformed into a set of planning constraints that need to be satisfied. The formulation captures the essence of the action model and unifies functionalities that are only individually supported by other learning approaches, such as costs, noise/uncertainty on actions, information on intermediate state observations, and mutex reasoning. Reliability is a key concern in data-driven learning, but existing approaches usually learn action models that can be imprecise, where imprecision here indicates that something incorrect has been learned. On the contrary, the proposed approach guarantees reliability in terms of perfect precision by using constraint propagation. This means that what is learned is 100 % correct (i.e., error-free), not only for the initial observations but also for future observations. To our knowledge, this is a novelty in the action model learning literature. Although perfect precision might potentially limit the amount of learned information, exhaustive experiments over 20 planning domains show that this amount is comparable to, and even better than, that of ARMS and FAMA, two state-of-the-art benchmarks in action model learning. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
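The "perfect precision" guarantee can be illustrated with a deliberately simplified toy: assert a precondition only if it held in every observed application of the action, so nothing incorrect is ever learned, possibly at the cost of learning less. This intersection-style propagation is only a sketch of the idea; the paper's constraint-programming formulation is far richer and also handles costs, noise and intermediate states:

from functools import reduce

# Hypothetical (state, action) pairs extracted from plan traces.
observations = [
    ({"door_open", "has_key", "at_door"}, "enter"),
    ({"door_open", "at_door", "lamp_on"}, "enter"),
    ({"door_open", "at_door"}, "enter"),
]

def learn_preconditions(obs, action):
    # Keep only facts true in *every* state where the action was applied:
    # sound with respect to the observations, never over-claiming.
    states = [s for s, a in obs if a == action]
    return reduce(set.intersection, states) if states else set()

print(learn_preconditions(observations, "enter"))
# {'door_open', 'at_door'} -- 'has_key' and 'lamp_on' are not asserted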
32. Hybrid tool combining stiff and elastic grinding.
- Author
-
Pratap, Ashwani, Yamato, Shuntaro, and Beaucamp, Anthony
- Subjects
GLASS ,GRINDING machines ,GRINDING wheels ,MACHINING - Abstract
The fabrication process chain for optically smooth surfaces tends to include several time-consuming grinding and polishing steps. To reduce process time, a hybrid tool is proposed in which a stiff grinding tool and a shape adaptive grinding (SAG) tool are fused together, by taking advantage of the elastic nature of SAG tools. The material removal achieved by the hybrid tool is equivalent to discretely using the stiff grinding and SAG tools in sequence. Under similar processing conditions, a smooth surface of ∼0.02 μm Ra can be obtained on BK7 glass with the proposed tool, instead of ∼0.2 μm Ra with the stiff grinding tool. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
33. Investigation on square wave and cyclic voltammetry approaches of the Pb2+, Cd2+, Co2+ and Hg2+ in tap water of Beni Mellal City (Morocco).
- Author
-
Laghlimi, Charaf, Moutcine, Abdelaziz, Elamrani, Morad, Chtaini, Abdelilah, Isaad, Jalal, Belkhanchi, Hamza, and Ziat, Younes
- Subjects
SQUARE waves ,CYCLIC voltammetry ,DRINKING water ,LEAD ,CHEMICAL formulas ,CADMIUM ,ANALYSIS of heavy metals ,MERCURY - Abstract
A sensitive sensor has been prepared to detect and quantify electrochemically Pb2+, Cu2+ and Co2+ in drinking water. The objective of this work is to assess the trueness and precision of two electrochemical methods, square wave voltammetry (SWV) and cyclic voltammetry (CV), for the detection of lead, copper and cobalt ions in tap water. The electrode used for this purpose is modified with the organic molecule EDTA (ethylenediaminetetraacetic acid), of chemical formula C10H16N2O8, added to a specific amount of graphite carbon powder (CP). The resulting paste is packed into a cylindrical plastic cavity, and this assembly is attached to a carbon graphite rod to ensure the passage of current. The detection and quantification limits of the carbon-paste electrode CPE-1% EDTA for the reduction peak (CVPic-red) are the lowest, at 9.31 × 10⁻¹⁰ mM and 3.1 × 10⁻⁹ mM respectively. The coefficient of variation (CV*) and repeatability uncertainty of measurements made by SWV are lower than those of CV; SWV thus gives more precise but less true results, while the CV reduction peak gives truer but less precise results. For all studied metals the method is linear in the concentration range 0.6–2.1 mM, except for mercury, where it is linear over 0.3–2.1 mM. Cadmium has the lowest systematic error (SE = 0.009 mA/cm2), followed by lead (0.013 mA/cm2) and then mercury. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
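For readers unfamiliar with how detection and quantification limits such as those quoted above are typically derived, here is a generic Python sketch using the common 3.3·s/slope and 10·s/slope estimates from a linear calibration; the data and the paper's exact procedure are not reproduced:

import numpy as np

def lod_loq(conc, signal):
    # Detection and quantification limits from a linear calibration,
    # using the residual standard deviation of the fit.
    conc, signal = np.asarray(conc, float), np.asarray(signal, float)
    slope, intercept = np.polyfit(conc, signal, 1)
    resid = signal - (slope * conc + intercept)
    s = resid.std(ddof=2)                 # n - 2 degrees of freedom
    return 3.3 * s / slope, 10.0 * s / slope

# Hypothetical calibration in the paper's linear range (0.6-2.1 mM):
conc = np.array([0.6, 0.9, 1.2, 1.5, 1.8, 2.1])
signal = 0.85 * conc + 0.02 + np.random.default_rng(1).normal(0, 0.01, 6)
print(lod_loq(conc, signal))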
34. Pragmatic evaluation of data mining models based on quality assessment & metric analysis.
- Author
-
Ghongade, Trupti G. and Khobragade, R.N.
- Subjects
DATA mining ,DEEP learning ,CYBER physical systems ,DATA modeling ,COMPUTATIONAL complexity ,SCALABILITY - Abstract
Data mining is at the heart of most modern-day cyber-physical deployments. Given such large-scale use, model selection is of primary importance when developing cyber-physical systems. Mining models, however, vary widely in their internal performance characteristics, which makes it difficult for researchers to identify optimum models for application-specific and performance-specific deployments. These models also differ in their internal qualitative and quantitative performance measures, including precision, accuracy, recall, sensitivity, deployment cost, computational complexity and scalability. To reduce this ambiguity, this text compares the models in terms of their performance-level nuances, functional advantages, deployment-specific limitations, and context-specific future scopes, enabling researchers to identify optimum models for function-specific deployments. It was observed that Neural Network (NN) based models, including Convolutional NNs, Region-based NNs and Recurrent NNs, showed better functional characteristics than linear mining models for large-scale use cases. To further simplify model selection, the underlying models are also compared on performance metrics including accuracy, complexity and scalability. Based on this performance-specific evaluation, it was observed that bioinspired models combined with deep learning techniques can outperform existing models in multiple application scenarios, allowing readers to identify optimum models from their performance-specific characteristics. This text also evaluates a novel Mining Scalability Metric (MSM), which combines primary and secondary performance measures and assists in identifying mining techniques with higher accuracy, lower complexity and faster response, thereby reducing the ambiguity of the model selection process. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. Next-generation smart watches to estimate whole-body composition using bioimpedance analysis: accuracy and precision in a diverse, multiethnic sample.
- Author
-
Bennett, Jonathan P, Liu, Yong En, Kelly, Nisa N, Quon, Brandon K, Wong, Michael C, McCarthy, Cassidy, Heymsfield, Steven B, and Shepherd, John A
- Subjects
BODY composition ,PHOTON absorptiometry ,STATISTICAL reliability ,LEAN body mass ,WEARABLE technology ,BIOELECTRIC impedance - Abstract
Background Novel advancements in wearable technologies include continuous measurement of body composition via smart watches. The accuracy and stability of these devices are unknown. Objectives This study evaluated smart watches with integrated bioelectrical impedance analysis (BIA) sensors for their ability to measure and monitor changes in body composition. Methods Participants recruited across a range of BMIs received duplicate body composition measures using 2 wearable bioelectrical impedance analysis (W-BIA) model smart watches in sitting and standing positions, and multiple versions of each watch were used to evaluate inter- and intramodel precision. Duplicate laboratory-grade octapolar bioelectrical impedance analysis (8-BIA) and criterion DXA scans were acquired to compare estimates between the watches and laboratory methods. Test-retest precision and least significant changes assessed the ability to monitor changes in body composition. Results Of 109 participants recruited, 75 completed the full manufacturer-recommended protocol. No significant differences were observed between W-BIA watches by position or between watch models. Significant fat-free mass (FFM) differences (P < 0.05) were observed for both W-BIA and 8-BIA when compared with DXA, though the systematic biases relative to the criterion were correctable. No significant difference was observed between W-BIA and the laboratory-grade BIA technology for FFM (55.3 ± 14.5 kg for W-BIA versus 56.0 ± 13.8 kg for 8-BIA; P > 0.05; Lin's concordance correlation coefficient = 0.97). FFM was less precise on the watches than with DXA (CV 0.7% [root mean square error (RMSE) = 0.4 kg] for DXA versus 1.3% [RMSE = 0.7 kg] for W-BIA), requiring more repeat measures to achieve the same confidence in body composition changes over time as DXA. Conclusions After systematic correction, smart-watch BIA devices are capable of stable, reliable, and accurate body composition measurements, with precision comparable to but lower than that of laboratory measures. These devices allow for measurement in environments not accessible to laboratory systems, such as homes, training centers, and geographically remote locations. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
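The test-retest precision statistics reported (CV%, RMSE, and the confidence in detecting change over time) can be computed from duplicate scans as in this generic Python sketch, with hypothetical fat-free mass readings rather than the study's data:

import numpy as np

def duplicate_precision(m1, m2):
    # Test-retest precision from duplicate measures (densitometry-style):
    # RMSE = sqrt(mean(d^2 / 2)) for paired scans, %CV = RMSE / grand mean,
    # and least significant change LSC = 2.77 * RMSE (95% confidence).
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    rmse = np.sqrt(np.mean((m1 - m2) ** 2 / 2.0))
    cv = 100.0 * rmse / np.mean((m1 + m2) / 2.0)
    lsc = 2.77 * rmse
    return rmse, cv, lsc

# Hypothetical duplicate FFM readings (kg) from one watch model:
scan1 = [55.1, 48.3, 70.2, 61.5]
scan2 = [55.9, 47.8, 69.4, 62.3]
print(duplicate_precision(scan1, scan2))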
36. Leveraging auxiliary data to improve precision in inverse probability-weighted analyses.
- Author
-
Zalla, Lauren C., Yang, Jeff Y., Edwards, Jessie K., and Cole, Stephen R.
- Subjects
- *
HEALTH & Nutrition Examination Survey , *STATISTICAL sampling , *STATISTICAL bootstrapping , *MISSING data (Statistics) - Abstract
Purpose: To demonstrate improvements in the precision of inverse probability-weighted estimators by use of auxiliary variables, i.e., determinants of the outcome that are independent of treatment, missingness or selection. Methods: First with simulated data, and then with public data from the National Health and Nutrition Examination Survey (NHANES), we estimated the mean of a continuous outcome using inverse probability weights to account for informative missingness. We assessed gains in precision resulting from the inclusion of auxiliary variables in the model for the weights. We compared the performance of robust and nonparametric bootstrap variance estimators in this setting. Results: We found that the inclusion of auxiliary variables reduced the empirical variance of inverse probability-weighted estimators. However, that reduction was not captured in standard errors computed using the robust variance estimator, which is widely used in weighted analyses due to the non-independence of weighted observations. In contrast, a nonparametric bootstrap estimator properly captured the precision gain. Conclusions: Epidemiologists can leverage auxiliary data to improve the precision of weighted estimators by using bootstrap variance estimation, or a closed-form variance estimator that properly accounts for the estimation of the weights, in place of the standard robust variance estimator. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
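The core of the argument, that a bootstrap which re-estimates the weights in each replicate captures the precision gain from an auxiliary variable while the robust variance estimator does not, can be sketched in a few lines of Python. This simulation is illustrative only; the variable names and coefficients are invented:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
aux = rng.normal(size=n)                      # auxiliary variable: predicts the
y = 2.0 + 1.5 * aux + rng.normal(size=n)      # outcome...
p_obs = 1 / (1 + np.exp(-(0.5 + 0.8 * aux)))  # ...and, here, the missingness
obs = rng.uniform(size=n) < p_obs             # informative missingness

def ipw_mean(aux_i, y_i, obs_i):
    # IPW estimate of E[y]; the missingness model is re-fit on each call,
    # so bootstrap replicates re-estimate the weights.
    model = LogisticRegression().fit(aux_i.reshape(-1, 1), obs_i)
    ps = model.predict_proba(aux_i.reshape(-1, 1))[:, 1]
    w = 1.0 / ps[obs_i]
    return np.sum(w * y_i[obs_i]) / np.sum(w)

est = ipw_mean(aux, y, obs)
reps = [ipw_mean(aux[i], y[i], obs[i])        # nonparametric bootstrap that
        for i in (rng.integers(0, n, n)       # re-estimates the weights
                  for _ in range(200))]
print(est, np.std(reps, ddof=1))              # point estimate, bootstrap SE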
37. Effect of the Segmentation Threshold on Computed Tomography–Based Reconstruction of Skull Bones with Reference Optical Three-Dimensional Scanning.
- Author
-
Singh, Ramandeep, Singh, Rajdeep, Baby, Britty, and Suri, Ashish
- Subjects
- *
SKULL , *COMPUTED tomography , *THREE-dimensional modeling , *DIAGNOSTIC imaging - Abstract
A variety of applications related to neurosurgical procedures, education, and training require accurate reconstruction of the involved structures from medical images such as computed tomography (CT). This study evaluates the quality of CT-based reconstruction of dry skull bones for advanced neurosurgical applications. The accuracy and precision of these models were examined against reference optical scanning. Three consecutive CT and optical scans of different skull bones were acquired and used to develop three-dimensional models. The accuracy of the models was examined by manual inspection of defined anatomical landmarks of the skull, and reproducibility by deviation analysis of models developed from repeated CT and optical scans. Precision was excellent in both techniques, with less than 0.1 mm deviation error. In the interscan evaluation of the CT versus optical scan models, deviations of more than 0.1 mm were observed in 16 of 21 instances. CT reconstruction using standard segmentation algorithms results in missing bone portions when the default bone segmentation threshold is used. The segmentation threshold was therefore varied to reconstruct missing bone regions, and its effect on iso-surface generation was evaluated. The threshold variation increased mean surface deviations by up to 0.6 mm. The study reveals that bone structure, complexity, and segmentation threshold lead to CT reconstruction variability. The trade-off between the desired model and the accepted mean deviation should be weighed according to the traits of the intended application. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
38. MRI in the Assessment of TMJ-Arthritis in Children with JIA; Repeatability of a Newly Devised Scoring System.
- Author
-
Angenete, Oskar W., Augdal, Thomas A., Rygg, Marite, and Rosendahl, Karen
- Abstract
Rationale and Objectives: The temporomandibular joint (TMJ) is commonly involved in children with juvenile idiopathic arthritis. The diagnosis and evaluation of disease progression depend on medical imaging, whose precision is under debate. Several scoring systems have been proposed, but transparent testing of the precision of their constituents is lacking. The present study tests the precision of 25 imaging features based on magnetic resonance imaging (MRI). Materials and Methods: Clinical data and imaging were obtained from the Norwegian juvenile idiopathic arthritis study, the NorJIA study. Twenty-five imaging features of the TMJ in MRI datasets from 86 study participants were evaluated by two experienced radiologists for inter- and intraobserver agreement. Agreement of ordinal variables was measured with Cohen's linear or weighted Kappa as appropriate. Agreement of continuous measurements was assessed with 95% limits of agreement according to Bland-Altman. Results: In the osteochondral domain, the ordinal imaging variables "loss of condylar volume," "condylar shape," "condylar irregularities," "shape of the eminence/fossa," "disk abnormalities," and "condylar inclination" showed inter- and intraobserver agreement above Kappa 0.5. In the inflammatory domain, the ordinal imaging variables "joint fluid," "overall impression of inflammation," "synovial enhancement" and "bone marrow oedema" showed inter- and intraobserver agreement above Kappa 0.5. Continuous measurements performed poorly, with wide limits of agreement. Conclusion: A precise MRI-based scoring system for assessment of the TMJ in JIA is proposed, consisting of seven variables in the osteochondral domain and four in the inflammatory domain. Further testing of the clinical validity of the variables is needed. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
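The two agreement statistics used in the study are standard and straightforward to reproduce; a small Python sketch (with invented ratings and measurements, not the NorJIA data) computing a linearly weighted kappa and Bland-Altman 95% limits of agreement:

import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical interobserver scores for an ordinal imaging feature (0-3):
rater1 = [0, 1, 2, 2, 3, 1, 0, 2, 3, 1]
rater2 = [0, 1, 2, 3, 3, 1, 1, 2, 2, 1]
kappa_w = cohen_kappa_score(rater1, rater2, weights="linear")

def bland_altman(a, b):
    # Mean difference (bias) and 95% limits of agreement for a
    # continuous measurement made twice.
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

print(kappa_w, bland_altman([4.1, 3.8, 5.0, 4.4], [4.3, 3.6, 5.2, 4.1]))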
39. Personalized brain stimulation of memory networks.
- Author
-
Cash, Robin F.H., Hendrikse, Joshua, Fernando, Kavisha B, Thompson, Sarah, Suo, Chao, Fornito, Alex, Yücel, Murat, Rogasch, Nigel C., Zalesky, Andrew, and Coxon, James P.
- Abstract
The finding that transcranial magnetic stimulation (TMS) can enhance memory performance via stimulation of parietal sites within the Cortical-Hippocampal Network counts as one of the most exciting findings in this field in the past decade. However, the first independent effort to fully replicate this finding found no discernible influence of TMS on memory performance. We examined whether this might relate to interindividual spatial variation in brain connectivity architecture, and the capacity of personalisation methodologies to overcome the noise inherent across independent scanners and cohorts. We implemented a recently detailed personalisation methodology to retrospectively compute individual-specific parietal targets and then examined their relation to TMS outcomes. Closer proximity between the actual and the novel fMRI-personalised targets was associated with greater improvement in memory performance. These findings demonstrate the potential importance of aligning brain stimulation targets with individual-specific differences in brain connectivity, and extend recent findings in prefrontal cortex. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
40. ANN-predictive modeling and GA-optimization for minimizing dimensional tolerance in Polyjet Additive Manufacturing.
- Author
-
Patpatiya, Parth, Shastri, Anshuman, Sharma, Shailly, Chaudhary, Kailash, and Bhatnagar, Varun
- Subjects
THREE-dimensional printing ,STRUCTURAL stability ,FOOD packaging ,GENETIC algorithms ,REGRESSION analysis ,BIOMEDICAL engineering - Abstract
Polyjet Additive Manufacturing (AM) is gaining attention owing to its ability to manufacture intricate parts with microscopic resolution. While extensive research and development have optimized the accuracy and strength of 3D printed structures, only a few models exist for predicting the accuracy of thermoplastic-based components produced by the polyjet technique. This study presents an accuracy-based predictive model for polyjet AM of thermoplastic structures that significantly raises fabrication standards and reduces the number of unnecessary experimental attempts. The Multivariate Regression Analysis (MVRA) statistical approach and the Generalized Regression Neural Network (GRNN) approach are used to evaluate the effect of significant polyjet printing parameters, such as support material, print mode, print orientation, and thermoplastic resin, on the dimensional stability of the printed structures, examined under an ultra-compact 3D laser sensor. The outputs are optimized using a Genetic Algorithm (GA), and the findings are consistent with experimental trials. This study sets new standards for additive manufacturing of thermoplastic components by examining the influence of polyjet printing factors on the accuracy of the manufactured fastener along the X, Y, and Z axes. The printed fasteners may eventually substitute for metal ones in industrial applications such as fluidics, electronics, biomedical engineering, food packaging, and the automotive industry. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
41. Response to rejoinder to: Cochrane et al., Errors and bias in marine conservation and fisheries literature: Their impact on policies and perceptions [Mar. Policy 168 (2024) 106329].
- Author
-
Cochrane, K.L., Butterworth, D.S., Hilborn, R., Parma, A.M., Plagányi, É.E., and Sissenwine, M.P.
- Abstract
We welcome the broad agreement of Sherley et al. [1], in their rejoinder to our paper on errors and bias in the marine conservation and fisheries literature, with our primary message that scientists should strive for objectivity in their publications and try to avoid publishing misleading science. However, we do not agree with their criticisms of that paper. In their rejoinder, Sherley et al. [1] focus on the estimates of the effect of island closures on penguin demographics in some of the papers criticized by Cochrane et al., but it is the precision (variance) of those estimates as reported in the papers, and the associated implications for management advice, that are at issue. The rejoinder's challenge to our observation that one of those papers provides an example of scientific neocolonialism is examined and refuted. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
42. Statistical approach for highest precision numerical differentiation.
- Author
-
Liptaj, Andrej
- Subjects
- *
NUMERICAL differentiation - Abstract
In numerical differentiation (ND), numerical evidence shows that round-off errors have a large random-like component when considered as a function of the discretization parameter. If a derivative is evaluated on a computer many times with different but reasonable discretization parameters, the round-off errors tend to average out, yielding results in which the related uncertainty is largely suppressed. Applying this approach in a regime where the round-off error dominates the discretization error (i.e. the discretization parameter is chosen small), one can effectively increase the precision of ND by several orders of magnitude in absolute error. For a general differentiable (e.g. non-analytic) black-box function differentiated with a fixed machine epsilon, the presented method is presumably the most precise currently known way of performing ND. The method is of practical use and has the potential to be generalized to other numerical procedures. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
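The idea of averaging over many randomized discretization parameters can be sketched as follows; this is an illustrative Python approximation of the approach described in the abstract, not the author's code, and the step-size range is an assumption:

import numpy as np

def derivative_averaged(f, x, n=1000, h0=1e-10, spread=3.0, seed=0):
    # Central differences at many randomly perturbed step sizes; h0 is chosen
    # *below* the classic optimum so that round-off, not discretization,
    # dominates each single estimate, and averaging suppresses its
    # random-like component.
    rng = np.random.default_rng(seed)
    h = h0 * np.exp(rng.uniform(0.0, np.log(spread), n))
    xp, xm = x + h, x - h
    return np.mean((f(xp) - f(xm)) / (xp - xm))   # divide by the *actual* steps

f, x = np.sin, 1.0
single = (f(x + 1e-10) - f(x - 1e-10)) / 2e-10
print(abs(single - np.cos(x)))                     # one noisy estimate
print(abs(derivative_averaged(f, x) - np.cos(x)))  # averaged: typically closer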
43. Naive Bayes classifier – An ensemble procedure for recall and precision enrichment.
- Author
-
Peretz, Or, Koren, Michal, and Koren, Oded
- Subjects
- *
NAIVE Bayes classification , *MACHINE learning , *CLASSIFICATION algorithms , *FEATURE selection , *DECISION making - Abstract
Data is essential for an organization to develop and make decisions efficiently and effectively. Machine learning classification algorithms are used to categorize observations into classes. The Naive Bayes (NB) classifier is a classification algorithm based on the Bayes theorem and the assumption that all predictors are independent of one another. Since this algorithm is based on probabilities, it is necessary to explore the sample distribution and feature type. This study presents an NB classifier method with enhanced performance among multidimensional and multivariate datasets, named the Naive Bayes Enrichment Method (NBEM). The NBEM is based on automated feature selection using threshold learning and the division of a dataset into sub-datasets according to the feature type. The main advantage of this method is the use of multiple NB classifiers based on different distributions and their combinations to classify a new observation. The final phase includes a weighted classification function that combines the results into a single output. This method was tested with 20 multivariate datasets and compared to other classification models and NB classifier variations. The results showed up to 76.3% improvement in the recall measure using NBEM and up to 43.9% improvement in the F1 score. Furthermore, we found that the error percentage of our method depended on the number of classes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
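A deliberately simplified Python sketch of the splitting-and-combining idea behind NBEM: continuous columns go to a Gaussian NB, binary columns to a Bernoulli NB, and the per-model class probabilities are merged with fixed weights (NBEM itself learns thresholds and weights automatically; everything below is a placeholder):

import numpy as np
from sklearn.naive_bayes import GaussianNB, BernoulliNB

rng = np.random.default_rng(0)
n = 400
X_cont = rng.normal(size=(n, 2))                    # continuous features
X_bin = (rng.uniform(size=(n, 3)) < 0.4).astype(int)  # binary features
y = ((X_cont[:, 0] + X_bin[:, 0]) > 0.5).astype(int)

# One NB per feature-type sub-dataset, each with a matching distribution.
g = GaussianNB().fit(X_cont, y)
b = BernoulliNB().fit(X_bin, y)

def nbem_predict(xc, xb, w=(0.6, 0.4)):
    # Weighted combination of the two models' class probabilities.
    proba = w[0] * g.predict_proba(xc) + w[1] * b.predict_proba(xb)
    return proba.argmax(axis=1)

print((nbem_predict(X_cont, X_bin) == y).mean())    # training accuracy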
44. Inter-individual variability in elliptical and diagonal error distributions potentially relevant to optimal motor planning in football instep kicking.
- Author
-
Sado, Natsuki, Yazawa, Morikazu, Tominaga, Tempei, and Akutsu, Kohei
- Subjects
- *
MOTOR ability , *FOOTBALL players , *GAUSSIAN distribution , *BODY movement , *PROBABILITY theory - Abstract
The distribution of motor errors can influence optimal motor planning (where to aim). In football instep kicking, it was shown that ball landing locations exhibit the right-up-left-down elliptical distribution in right-footed kickers and vice versa. However, this was reported as a result of mixed multiple kickers; the individual-level error distribution has been unclear. Here we show substantial inter-individual variability in error shape and error direction in the 30 kicks aimed at a target (1.7 m high, 11.0 m in front) by 27 male football players. All players exhibit right-up-left-down distributions with ellipticity (minor/major radius ratio of the 95% confidence ellipse) ranging from 0.25 to 0.77 and major axis angle ranging from 13 to 67° from the horizontal axis. The mean absolute error and the area of the 95% confidence ellipse are not significantly correlated with major axis angle (ρ ≤ 0.312) and ellipticity (| r | ≤ 0.343). By simulating shots aimed at the top-right and top-left edges of a goal with these observed ranges and normalised ellipse area, we reveal a wide range of probability of shots on goal (top-right: 2.7-fold difference, top-left: 1.5-fold difference) due to inter-individual variability in error shape and direction independent of error size. Further simulation shows that, depending on the shape-direction combination, the aiming points with the same 80% probability of shots on goal change by up to 0.3 m vertically, even for the same minimal error size. We highlight the importance for football players to consider not only accuracy/precision, but also error shape and direction to optimise motor planning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
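The shape descriptors used in the study (ellipticity and major-axis angle of the 95% confidence ellipse) can be computed from raw error coordinates via an eigendecomposition of their covariance, as in this Python sketch with simulated, not measured, landing errors:

import numpy as np
from scipy.stats import chi2

def error_ellipse(dx, dy, level=0.95):
    # Ellipticity (minor/major radius ratio), major-axis angle from the
    # horizontal, and area of the confidence ellipse of 2-D errors.
    cov = np.cov(np.vstack([dx, dy]))
    evals, evecs = np.linalg.eigh(cov)        # eigenvalues ascending
    k = chi2.ppf(level, df=2)
    minor, major = np.sqrt(k * evals)         # ellipse semi-axes
    angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))
    return minor / major, angle % 180.0, np.pi * major * minor

# Hypothetical right-up-left-down scatter of landing errors (m), 30 kicks:
rng = np.random.default_rng(0)
t = rng.normal(0, 0.6, 30)
dx = t + rng.normal(0, 0.25, 30)
dy = 0.8 * t + rng.normal(0, 0.25, 30)
print(error_ellipse(dx, dy))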
45. Emerging methods and techniques for cancer biomarker discovery.
- Author
-
Dakal, Tikam Chand, Dhakar, Ramgopal, Beura, Abhijit, Moar, Kareena, Maurya, Pawan Kumar, Sharma, Narendra Kumar, Ranga, Vipin, and Kumar, Abhishek
- Subjects
- *
CIRCULATING tumor DNA , *TUMOR markers , *PROTEIN microarrays , *DNA fingerprinting , *NUCLEOTIDE sequencing , *METABOLOMICS - Abstract
Modern cancer research depends heavily on the identification and validation of biomarkers because they provide important information about cancer diagnosis, prognosis, and response to treatment. This review provides a comprehensive overview of cancer biomarkers, including their development phases and recent breakthroughs in transcriptomics and computational techniques for detecting them. Blood-based biomarkers, including circulating tumor DNA, exosomes, and microRNAs, hold great potential for non-invasive monitoring of tumor dynamics and treatment response. Comprehensive molecular profiles are provided by multi-omic technologies, which combine proteomics, metabolomics, and genomics to support biomarker identification and the targeting of therapeutic interventions. Genetic changes are detected by next-generation sequencing, and patterns of protein expression are found by protein arrays and mass spectrometry. Tumor heterogeneity and clonal evolution can be understood using metabolic profiling and single-cell studies. The use of several biomarkers (genetic, protein, mRNA, microRNA, and DNA profiles, among others) is projected to rise, enabling multi-biomarker analysis and improving individualised treatment plans. Biomarker identification and patient outcome prediction are further improved by developments in AI algorithms and imaging techniques. Robust biomarker validation and reproducibility require cooperation among industry, academia, and clinicians. Biomarkers can provide individualized care, meet unmet clinical needs, and enhance patient outcomes despite some obstacles. Precision medicine will continue to take shape as scientific research advances and the integration of biomarkers with cutting-edge technologies offers an ever more promising future for personalized cancer care. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Improved cut weight predictions from DEXA scans of lamb carcasses enables more accurate allocation of cuts to processing plans.
- Author
-
Calnan, H.B., Williams, A., Alston-Knox, C., Wang, G., Pitchford, W.S., and Gardner, G.E.
- Subjects
- *
DUAL-energy X-ray absorptiometry , *LAMBS , *RACTOPAMINE , *EWES , *PRODUCTION planning - Abstract
The value of precise dual energy X-ray absorptiometry (DEXA) cut weight predictions for allocating lambs to cut plans is unknown. Lambs (n = 191) varying in carcase weight (HSCW) and GR (tissue depth over the 12th rib) were DEXA scanned and boned out to weigh retail cuts. Cut weights were predicted using HSCW; HSCW + GR; HSCW + DEXA and HSCW + DEXA image components in GLM models. DEXA improved cut weight predictions in most cuts (P < 0.05). A dataset of 10,000 carcases was then simulated using the associations between HSCW, GR and cut weights, before being truncated to 4500 lambs representing one day's HSCW distribution. A lamb Carcase Optimisation Tool scenario was developed with 2–3 cut options per carcase section and cut weight thresholds applied to several cuts. Processing costs, market values and actual cut weights were input into the Optimiser to determine carcase allocation to cut options for optimised profit. This scenario was repeated using the predicted cut weights to determine the resulting cut misallocations. DEXA-predicted cut weights produced 16.7% and 8.0% fewer misallocations than HSCW and GR, respectively; 20.8% and 14.3% fewer in shortloins; and 25.5% and 12.9% fewer in hindquarters. While cut misallocations have little direct impact on total profit, as product is over- and under-valued when misallocated, reducing them will improve processor compliance when sorting carcases into cut plans, reducing the need to re-trim, downgrade and repackage product and the erosion of customer confidence caused by supplying product that does not meet market specifications. • DEXA scans improve prediction of lamb cut weights compared to carcase weight and GR. • A lamb Carcase Optimisation Tool shows how cut weight predictions affect allocation. • DEXA reduces misallocation of lamb carcases to cut plans in optimised processing. • DEXA cut weights reduce misallocations by 17% over carcase weight predictions. • DEXA cut weights reduce misallocations by 8% over GR tissue depth predictions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Transdiagnostic failure to adapt interoceptive precision estimates across affective, substance use, and eating disorders: A replication and extension of previous results.
- Author
-
Lavalley, Claire A., Hakimi, Navid, Taylor, Samuel, Kuplicki, Rayus, Forthman, Katherine L., Stewart, Jennifer L., Paulus, Martin P., Khalsa, Sahib S., and Smith, Ryan
- Subjects
- *
EATING disorders , *INTEROCEPTION , *PATHOLOGICAL psychology , *SUBSTANCE abuse , *LOGISTIC regression analysis , *DRUG target - Abstract
Recent Bayesian theories of interoception suggest that perception of bodily states rests upon a precision-weighted integration of afferent signals and prior beliefs. In a previous study, we fit a computational model of perception to behavior on a heartbeat tapping task to test whether aberrant precision-weighting could explain misestimation of cardiac states in psychopathology. We found that, during an interoceptive perturbation designed to amplify afferent signal precision (inspiratory breath-holding), healthy individuals increased the precision-weighting assigned to ascending cardiac signals (relative to resting conditions), while individuals with anxiety, depression, substance use disorders, and/or eating disorders did not. In this pre-registered study, we aimed to replicate and extend our prior findings in a new transdiagnostic patient sample (N = 285) similar to the one in the original study. As expected, patients in this new sample were also unable to adjust beliefs about the precision of cardiac signals – preventing the ability to accurately perceive changes in their cardiac state. Follow-up analyses combining samples from the previous and current study (N = 719) also afforded power to identify group differences between narrower diagnostic categories, and to examine predictive accuracy when logistic regression models were trained on one sample and tested on the other. With this confirmatory evidence in place, future studies should examine the utility of interoceptive precision measures in predicting treatment outcomes and test whether these computational mechanisms might represent novel therapeutic targets. • We used a computational approach to model behavior on a cardiac interoception task. • A patient group included affective, substance use, and eating disorders. • Unlike healthy individuals, patients did not update sensory precision estimates. • This replicated results found in a previous sample of similar patients. • Combining samples, we also confirmed this effect in narrower diagnostic categories. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
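The precision-weighted integration at the core of the Bayesian account can be written in a few lines; a minimal Python sketch with invented numbers, in which healthy adaptation corresponds to raising the afferent precision during the breath-hold condition:

def perceived_state(prior_mu, pi_prior, signal, pi_signal):
    # Posterior mean of a precision-weighted integration of a prior belief
    # and an afferent (here, cardiac) signal; pi_* denote precisions.
    w = pi_signal / (pi_prior + pi_signal)     # weight on the afferent signal
    return prior_mu + w * (signal - prior_mu)

# Resting condition versus breath-hold with up-weighted afferent precision:
resting = perceived_state(prior_mu=60, pi_prior=4.0, signal=75, pi_signal=1.0)
breathhold = perceived_state(prior_mu=60, pi_prior=4.0, signal=75, pi_signal=8.0)
print(resting, breathhold)   # the second tracks the true signal more closely

The study's finding, on this reading, is that patient groups keep pi_signal fixed across conditions, so their perceived state stays anchored to the prior even when the afferent signal becomes more reliable.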
48. Validation of the Lead Care II System in Cape vultures (Gyps coprotheres) in comparison to ICP-MS using pure standards.
- Author
-
Naidoo, V. and Wolter, K.
- Subjects
- *
LEAD exposure , *LEAD , *VULTURES , *BLOOD sampling , *BIRDS of prey - Abstract
Lead toxicosis remains a concern in raptors, especially following feeding on carcasses from hunting. Rapid diagnosis of lead exposure and easy field monitoring are desirable. The LeadCareII analytical system, validated for rapid diagnosis of lead toxicity in humans, has been described as a useful evaluation system in various species. In this study we attempted to validate the LeadCareII system in the Cape Vulture (CV) (Gyps coprotheres). Blood samples from CVs housed in captivity with low background lead exposure were pooled and spiked with known concentrations of a lead standard (0–60 µg/dL). Samples were analyzed by the LeadCareII system and by ICP-MS. The final results showed that, despite good linearity, the LeadCareII system underestimated lead concentrations by up to 50%. While the results could be corrected with the derived equation, this is not recommended given the large underestimation evident, the reason for which is presently unknown. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
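The validation logic, good linearity but a large proportional underestimation, amounts to a simple regression comparison; a Python sketch with invented concentrations shaped to mimic the reported roughly 50% underestimation:

import numpy as np

# Hypothetical spiked concentrations (ICP-MS reference) versus LeadCareII
# readings: strong linearity, large negative proportional bias.
ref = np.array([5.0, 10.0, 20.0, 40.0, 60.0])   # ug/dL
lc = np.array([2.8, 5.4, 10.9, 21.6, 31.7])

slope, intercept = np.polyfit(ref, lc, 1)
r = np.corrcoef(ref, lc)[0, 1]
pct_bias = 100.0 * (lc - ref) / ref
print(f"slope={slope:.2f} r={r:.3f} mean bias={pct_bias.mean():.0f}%")
# r near 1 with ~50% underestimation: linear, but a correction equation
# would still be risky for clinical decisions, as the abstract concludes.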
49. A collaborative study on the precision of the Markov chain Monte Carlo algorithms used for DNA profile interpretation.
- Author
-
Riman, Sarah, Bright, Jo-Anne, Huffman, Kaitlin, Moreno, Lilliana I., Liu, Sicen, Sathya, Asmitha, and Vallone, Peter M.
- Subjects
MARKOV chain Monte Carlo ,ENVIRONMENTAL research ,RANDOM numbers ,FOREST measurement ,ENVIRONMENTAL sciences - Abstract
Several fully continuous probabilistic genotyping software (PGS) packages use Markov chain Monte Carlo (MCMC) algorithms to assign weights to different proposed genotype combinations at a locus. Because of the Monte Carlo aspect, replicate interpretations of the same profile in this software are not expected to produce identical weights and likelihood ratio (LR) values. This paper reports a detailed precision study under reproducibility conditions conducted as a collaborative exercise across the National Institute of Standards and Technology (NIST), the Federal Bureau of Investigation (FBI), and the Institute of Environmental Science and Research (ESR). Replicate interpretations generated across the three laboratories used the same input files, software version, and settings, but different random number seeds and different computers. This work demonstrates that using different computers to analyze replicate interpretations does not contribute to variation in LR values. The study quantifies the magnitude of differences in the assigned LRs due solely to run-to-run MCMC variability and addresses potential explanations for the observed differences. • This collaborative study explores precision under reproducibility conditions. • We explore differences in assigned LR values attributable solely to the stochasticity of the MCMC resampling method. • The majority of log10(LR) values in the H1-true tests were within 0 to 1 order of magnitude. • Replicate runs that resulted in an LR difference of more than 1 order of magnitude on the log10 scale are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
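Run-to-run Monte Carlo variability of the kind quantified here can be demonstrated with a toy in which the same probability (a genotype "weight") is re-estimated under different seeds; real PGS use full MCMC, and the weight and denominator below are invented, but the seed-driven scatter of log10(LR) behaves analogously:

import numpy as np

def log10_lr(seed, n_samples=20_000, true_weight=0.02, denom=1e-6):
    # Monte Carlo estimate of a "weight" (a probability), then an LR
    # against an invented denominator; only the seed changes between runs.
    rng = np.random.default_rng(seed)
    hits = rng.uniform(size=n_samples) < true_weight
    return np.log10(hits.mean() / denom)

lrs = [log10_lr(seed) for seed in range(50)]    # 50 replicate "runs"
print(np.mean(lrs), np.max(lrs) - np.min(lrs))  # central value and spread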
50. An improved methodology for high-resolution LA-ICP-MS trace-element fingerprinting of tephra layers: Insights from the Upper and Lower Nariokotome Tuffs, Turkana Basin, Kenya.
- Author
-
Samim, Saini, Dalton, Hayden, Hergt, Janet, Greig, Alan, and Phillips, David
- Subjects
- *
INDUCTIVELY coupled plasma mass spectrometry , *LASER ablation inductively coupled plasma mass spectrometry , *VOLCANIC ash, tuff, etc. , *TRACE elements , *TRACE element analysis , *TRACE elements in water , *OBSIDIAN , *TEPHROCHRONOLOGY , *SPATIAL resolution - Abstract
Single grain, laser-ablation inductively-coupled-plasma mass spectrometry (LA-ICP-MS) trace element analysis of volcanic glass shards has emerged as a valuable tool in constructing regional tephrostratigraphic frameworks for paleoanthropological sites. The limiting factors in analysing single shards for trace element compositions are a) analytical precision (% RSD), which depends on the size and homogeneity of the area available for analysis, and b) accuracy (% bias), which depends on compositional differences between natural rhyolitic samples and available reference materials. The limited vertical thickness of the tephra glasses, the small surface areas accessible for ablation and the presence of inclusions require increased spatial resolution, ideally laser spot diameters ≤20 μm, to achieve effective 'fingerprinting' of tephra layers. Typically, LA-ICP-MS 'spot' analyses at such high spatial resolution yield degraded precision (>10% RSD) and accuracy (>±5% bias) for natural tephra samples compared to reference materials. Here we present a novel approach for LA-ICP-MS trace element analysis of tephra glass and apply it to two Plio-Pleistocene tephra layers, the Upper and Lower Nariokotome tuffs, and their enclosed pumice clasts, from the Turkana Basin, NW Kenya. These tuffs were chosen because they are characterised by homogeneous yet distinct major element glass compositions and are examples of tephra layers of significant paleoanthropological importance. Our approach uses ablation 'traverses' across individual glass fragments and employs the Lower Nariokotome Tuff as a matrix-matched secondary reference material for the analytically 'unknown' samples. This method yields significant improvements in both analytical precision (1–5% RSD) and accuracy (<±5% bias) for 25 trace elements, using both 10 μm and 20 μm beam diameters. These results allow us to identify trace element discriminators for these tuffs and successfully correlate pumices to their respective eruption events. This study contributes to the developing field of single-grain LA-ICP-MS trace element geochemistry as a tephra correlation tool by enhancing the data quality acquired for natural samples. • A traverse ablation strategy in LA-ICP-MS trace element geochemistry improves precision. • Trace element geochemistry provides 'fingerprints' for tephra identification. • Improved precision enables identification of co-magmatic pumice and tephra layers. • An improved analytical approach for inter- and intra-basin tephrochronology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
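The two figures of merit used throughout, precision as % RSD of replicate analyses and accuracy as % bias against a reference value, are simple to compute; a Python sketch with invented trace-element concentrations illustrating traverse-like versus spot-like data quality:

import numpy as np

def rsd_bias(measured, reference):
    # Precision (% RSD) of replicates and accuracy (% bias) against a
    # reference value, as used to compare ablation strategies.
    m = np.asarray(measured, float)
    rsd = 100.0 * m.std(ddof=1) / m.mean()
    bias = 100.0 * (m.mean() - reference) / reference
    return rsd, bias

# Hypothetical Zr concentrations (ppm) from replicate 10 um analyses:
print(rsd_bias([412, 405, 419, 408, 415], reference=410))   # traverse-like
print(rsd_bias([460, 380, 430, 352, 441], reference=410))   # spot-like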