16,722 results on "Data Reduction"
Search Results
2. MPCA: Constructing the APTs provenance graphs through multi-perspective confidence and association
- Author
- Zhang, Zhao, Luo, Senlin, Guan, Yingdan, and Pan, Limin
- Published
- 2025
- Full Text
- View/download PDF
3. Multitask methods for predicting molecular properties from heterogeneous data.
- Author
- Fisher, K. E., Herbst, M. F., and Marzouk, Y. M.
- Subjects
- KRIGING, DENSITY functional theory, DATA reduction
- Abstract
Data generation remains a bottleneck in training surrogate models to predict molecular properties. We demonstrate that multitask Gaussian process regression overcomes this limitation by leveraging both expensive and cheap data sources. In particular, we consider training sets constructed from coupled-cluster (CC) and density functional theory (DFT) data. We report that multitask surrogates can predict at CC-level accuracy with a reduction in data generation cost by over an order of magnitude. Of note, our approach allows the training set to include DFT data generated by a heterogeneous mix of exchange–correlation functionals without imposing any artificial hierarchy on functional accuracy. More generally, the multitask framework can accommodate a wider range of training set structures—including the full disparity between the different levels of fidelity—than existing kernel approaches based on Δ-learning although we show that the accuracy of the two approaches can be similar. Consequently, multitask regression can be a tool for reducing data generation costs even further by opportunistically exploiting existing data sources. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
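The multitask idea in the abstract above lends itself to a compact illustration. Below is a minimal sketch of an intrinsic-coregionalization multitask Gaussian process: many cheap, biased observations (a DFT stand-in) plus a few expensive ones (a CC stand-in) jointly predict the expensive task. The kernel length-scale, the task-covariance matrix B, and the toy functions are illustrative assumptions, not the paper's model.

```python
import numpy as np

def rbf(X1, X2, ell=0.5):
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell**2)

# Toy 1-D "molecular property": expensive truth and a cheap, biased surrogate.
f_cc = lambda x: np.sin(3 * x)             # high-fidelity (CC-like)
f_dft = lambda x: np.sin(3 * x) + 0.3 * x  # cheap, systematically biased (DFT-like)

rng = np.random.default_rng(0)
x_cheap = rng.uniform(0, 2, 40)            # many cheap points
x_exp = rng.uniform(0, 2, 5)               # few expensive points
X = np.concatenate([x_cheap, x_exp])
t = np.array([0] * 40 + [1] * 5)           # task index per observation
y = np.concatenate([f_dft(x_cheap), f_cc(x_exp)])

B = np.array([[1.0, 0.9], [0.9, 1.0]])     # assumed task covariance (ICM)
K = B[t][:, t] * rbf(X, X) + 1e-6 * np.eye(len(X))

x_star = np.linspace(0, 2, 7)
k_star = B[1][t][:, None] * rbf(X, x_star) # predict the expensive task (t=1)
mean = k_star.T @ np.linalg.solve(K, y)
print(np.c_[x_star, mean, f_cc(x_star)])   # prediction tracks the CC truth
```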
4. Hybrid optimization‐based topology construction and DRNN‐based prediction method for data reduction in IoT.
- Author
- Pawar, Bhakti B. and Jadhav, Devyani S.
- Subjects
- RECURRENT neural networks, WIRELESS sensor networks, DATA reduction, INTERNET of things, PREDICTION models
- Abstract
Summary: The Internet of Things (IoT) acts as a prevalent networking setup that plays a vital role in everyday activities due to the increased services provided through uniform data collection. In this research paper, a hybrid optimization approach for the construction of heterogeneous multi‐hop IoT wireless sensor network (WSN) network topology and data aggregation and reduction is performed using a deep learning model. Initially, the IoT network is stimulated and the network topology is constructed using Namib Beetle Spotted Hyena Optimization (NBSHO) by considering different network parameters and encoding solutions. Moreover, the data aggregation and reduction in the IoT network are performed using a Deep Recurrent Neural Network (DRNN)‐based prediction model. In addition, the performance improvement of the designed NBSHO + DRNN approach is validated. Here, the designed NBSHO + DRNN method achieved a packet delivery ratio (PDR) of 0.469, energy of 0.367 J, prediction error of 0.237, and delay of 0.595 s. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
5. Cavity formation in silica‐filled rubber compounds observed during deformation by ultra small‐angle x‐ray scattering.
- Author
- Yakovlev, Ilya, Sztucki, Michael, Fleck, Frank, Karimi‐Varzaneh, Hossein Ali, Lacayo‐Pineda, Jorge, Vatterott, Christoph, and Giese, Ulrich
- Subjects
- STATISTICAL measurement, STRUCTURAL stability, CAVITATION, DATA reduction, RUBBER
- Abstract
When silica‐filled rubber compounds are deformed, structural modifications in the material's bulk lead to irreversible damage, the most significant of which is cavitation appearing within the interfaces of interconnected polymer and filler networks. This work introduces a new method to analyze cavitation in industrial‐grade rubbers based on ultra small‐angle x‐ray scattering. This method employs a specially designed multi‐sample stretching device for high‐throughput measurements with statistical relevance. The proposed data reduction approach allows for early detection and quantification of cavitation while providing at the same time information on the hierarchical filler structures at length scales ranging from the primary particle size to large silica agglomerates over four orders of magnitude. To validate the method, the scattering of SSBR rubber compounds filled with highly dispersible silica at different ratios was measured under quasi‐static strain. The strain was applied in incremental steps up to a maximum achievable elongation or breakage of the sample. From the measurements performed in multiple repetitions, it was found that the minimum strain necessary for cavity formation and the size evolution of the cavities with increasing strain are comparable between these samples. The sample with the highest polymer content showed the lowest rate of cavity formation and higher durability of silica structures. The structural stability of the compounds was determined by the evolution of the filler hierarchical structures, obtained by fitting data across the available strain range. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
6. SZ4IoT: an adaptive lightweight lossy compression algorithm for diverse IoT devices and data types.
- Author
- Kadhum Idrees, Sara, Azar, Joseph, Couturier, Raphaël, Kadhum Idrees, Ali, and Gechter, Franck
- Abstract
The Internet of Things (IoT) is an essential platform for industrial applications since it enables massive systems connecting many IoT devices for analytical data collection. This attribute is responsible for the exponential growth in the amount of data created by IoT devices. IoT devices can generate voluminous amounts of data, which may place extraordinary demands on their limited resources, data transfer bandwidths, and cloud storage. Using lightweight IoT data compression techniques is a practical way to deal with these problems. This paper presents an adaptable, lightweight SZ lossy compression algorithm for IoT devices (SZ4IoT), a lightweight and adjusted version of the SZ lossy compression method. The SZ4IoT is a local (non-distributed) and interpolation-based compressor that can accommodate any sensor data type and can be implemented on microcontrollers with low resources. It operates on univariate and multivariate time series. It was implemented and tested on various devices, including the ESP32, Teensy 4.0, and RP2040, and evaluated on multiple datasets. The experiments of this paper focus on the compression ratio, compression and decompression time, normalized root mean square error (NRMSE), and energy consumption, and prove the effectiveness of the proposed approach. The compression ratio outperforms LTC, WQT RLE, and K RLE by two, three, and two times, respectively. The proposed SZ4IoT decreased the consumed energy for a data size of 40 KB by 31.4%, 29.4%, and 27.3% compared with K RLE, LTC, and WQT RLE, respectively. In addition, this paper investigates the impact of stationary versus non-stationary time series datasets on the compression ratio. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
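SZ-style compressors work by predicting each sample from already-decoded neighbors and entropy-coding the quantized prediction residuals under a user-set error bound. The sketch below shows that core loop for a 1-D series with the simplest (previous-value) predictor; SZ4IoT itself is interpolation-based and adds data-type handling and multivariate support, so treat the predictor and bound handling here as assumptions.

```python
import numpy as np

def sz_like_compress(x, eps):
    """Quantize prediction residuals so reconstruction error stays <= eps."""
    codes, recon, prev = [], [], 0.0
    for v in x:
        q = int(round((v - prev) / (2 * eps)))  # residual vs. the predictor
        codes.append(q)                         # small ints -> entropy-codable
        prev = prev + 2 * eps * q               # decoder-visible reconstruction
        recon.append(prev)
    return np.array(codes), np.array(recon)

x = np.cumsum(np.random.default_rng(1).normal(0, 0.1, 1000))  # smooth series
codes, recon = sz_like_compress(x, eps=0.01)
assert np.max(np.abs(x - recon)) <= 0.01       # the error bound holds
print("unique symbols:", len(np.unique(codes)))  # few symbols => compressible
```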
7. Estimating evolutionary and demographic parameters via ARG-derived IBD.
- Author
- Huang, Zhendong, Kelleher, Jerome, Chan, Yao-ban, and Balding, David
- Subjects
- FAMILY history (Genealogy), DATA reduction, BAYESIAN field theory, SAMPLE size (Statistics), GENOMES
- Abstract
Inference of evolutionary and demographic parameters from a sample of genome sequences often proceeds by first inferring identical-by-descent (IBD) genome segments. By exploiting efficient data encoding based on the ancestral recombination graph (ARG), we obtain three major advantages over current approaches: (i) no need to impose a length threshold on IBD segments, (ii) IBD can be defined without the hard-to-verify requirement of no recombination, and (iii) computation time can be reduced with little loss of statistical efficiency using only the IBD segments from a set of sequence pairs that scales linearly with sample size. We first demonstrate powerful inferences when true IBD information is available from simulated data. For IBD inferred from real data, we propose an approximate Bayesian computation inference algorithm and use it to show that even poorly-inferred short IBD segments can improve estimation. Our mutation-rate estimator achieves precision similar to a previously-published method despite a 4 000-fold reduction in data used for inference, and we identify significant differences between human populations. Computational cost limits model complexity in our approach, but we are able to incorporate unknown nuisance parameters and model misspecification, still finding improved parameter inference. Author summary: Samples of genome sequences can be informative about the history of the population from which they were drawn, and about mutation and other processes that led to the observed sequences. However, obtaining reliable inferences is challenging, because of the complexity of the underlying processes and the large amounts of sequence data that are often now available. A common approach to simplifying the data is to use only genome segments that are very similar between two sequences, called identical-by-descent (IBD). The longer the IBD segment the more informative it is about recent shared ancestry, and current approaches restrict attention to IBD segments above a length threshold. We instead are able to use IBD segments of any length, allowing us to extract much more information from the sequence data. To reduce the computational burden we identify subsets of the available sequence pairs that lead to little information loss. Our approach exploits recent advances in inferring the genealogical history underlying the sample of sequences. Computational cost still limits the size and complexity of problems our method can handle, but where feasible we obtain dramatic improvements in the power of inferences. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
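Advantage (iii) above rests on a simple combinatorial point: all-pairs analysis costs O(n²) in the sample size, while a fixed number of matchings per sequence costs O(n). A generic sketch of such a linear-scaling pair design follows; the paper's actual selection scheme is an assumption here.

```python
import numpy as np

def linear_pairs(n, k=2, seed=0):
    """Return about k*n sequence pairs instead of all n*(n-1)/2.

    Generic O(n) design (k random circular matchings); the paper's
    actual pair-selection scheme may differ.
    """
    rng = np.random.default_rng(seed)
    pairs = set()
    for _ in range(k):
        perm = rng.permutation(n)
        for i in range(n):
            a, b = perm[i], perm[(i + 1) % n]
            pairs.add((min(a, b), max(a, b)))
    return sorted(pairs)

n = 1000
print(len(linear_pairs(n)), "pairs vs", n * (n - 1) // 2, "for all pairs")
```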
8. Using low-discrepancy points for data compression in machine learning: an experimental comparison.
- Author
- Göttlich, S., Heieck, J., and Neuenkirch, A.
- Subjects
- ARTIFICIAL intelligence, BIG data, DATA compression, MACHINE learning, IMAGE processing, K-means clustering
- Abstract
Low-discrepancy points (also called Quasi-Monte Carlo points) are deterministically and cleverly chosen point sets in the unit cube, which provide an approximation of the uniform distribution. We explore two methods based on such low-discrepancy points to reduce large data sets in order to train neural networks. The first one is the method of Dick and Feischl (J Complex 67:101587, 2021), which relies on digital nets and an averaging procedure. Motivated by our experimental findings, we construct a second method, which again uses digital nets, but Voronoi clustering instead of averaging. Both methods are compared to the supercompress approach of (Stat Anal Data Min ASA Data Sci J 14:217–229, 2021), which is a variant of the K-means clustering algorithm. The comparison is done in terms of the compression error for different objective functions and the accuracy of the training of a neural network. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
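A minimal sketch of the Voronoi-clustering variant described above: generate a low-discrepancy point set, assign each data point to its nearest low-discrepancy point, and keep one average per cell. The Halton generator, the cell-mean reduction, and the sizes are illustrative assumptions; the paper's digital-net construction and objective-aware weighting are omitted.

```python
import numpy as np
from scipy.stats import qmc
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
data = rng.random((100_000, 2))               # large training set in [0,1]^2

m = 256                                       # compressed size
centers = qmc.Halton(d=2, seed=0).random(m)   # low-discrepancy point set

# Voronoi-style reduction: average the data falling nearest to each center.
idx = cKDTree(centers).query(data)[1]
compressed = np.array([data[idx == j].mean(axis=0)
                       for j in range(m) if np.any(idx == j)])
print(compressed.shape)                       # <= (256, 2): reduced training set
```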
9. The ECP ALPINE project: In situ and post hoc visualization infrastructure and analysis capabilities for exascale.
- Author
- Ahrens, James, Arienti, Marco, Ayachit, Utkarsh, Bennett, Janine, Binyahib, Roba, Biswas, Ayan, Bremer, Peer-Timo, Brugger, Eric, Bujack, Roxana, Carr, Hamish, Chen, Jieyang, Childs, Hank, Dutta, Soumya, Essiari, Abdelilah, Geveci, Berk, Harrison, Cyrus, Hazarika, Subhashis, Fulp, Megan Hickman, Hristov, Petar, and Huang, Xuan
- Subjects
- SCIENTIFIC visualization, DATA reduction, DATA visualization, DATA analysis, ALGORITHMS
- Abstract
A significant challenge on an exascale computer is that the speed at which we compute results exceeds, by many orders of magnitude, the speed at which we can save those results. Therefore, the Exascale Computing Project (ECP) ALPINE project focuses on providing exascale-ready visualization solutions, including in situ processing. In situ visualization and analysis runs alongside the simulation, operating on results as they are generated and avoiding the need to save entire simulations to storage for later analysis. The ALPINE project made the post hoc visualization tools ParaView and VisIt exascale ready and developed in situ algorithms and infrastructures. The suite of ALPINE algorithms developed under ECP includes novel approaches to enable automated data analysis and visualization to focus on the most important aspects of the simulation. Many of the algorithms also provide data reduction benefits to meet the I/O challenges at exascale. ALPINE developed a new lightweight in situ infrastructure, Ascent. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
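The in situ pattern described above, analyzing while the simulation runs and writing only small summaries, can be sketched generically. The snippet below is schematic only: production infrastructures such as Ascent are driven through a publish/execute action API, and the histogram "analysis", cadence, and file naming here are assumptions.

```python
import numpy as np

def in_situ_step(field, step, every=10, nbins=64):
    """Reduce a field while the simulation runs instead of saving it all.

    Schematic stand-in for an in situ pipeline: every `every` steps, a
    tiny derived summary (here a histogram) is written, never the field.
    """
    if step % every:
        return None
    hist, edges = np.histogram(field, bins=nbins)
    np.savez(f"summary_{step:06d}.npz", hist=hist, edges=edges)
    return hist

field = np.zeros((128, 128))
for step in range(100):                  # stand-in for a simulation loop
    field += np.random.default_rng(step).normal(0, 1, field.shape)
    in_situ_step(field, step)            # the full field is never written out
```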
10. ZFP: A compressed array representation for numerical computations.
- Author
- Lindstrom, Peter, Hittinger, Jeffrey, Diffenderfer, James, Fox, Alyson, Osei-Kuffuor, Daniel, and Banks, Jeffrey
- Subjects
- PARTIAL differential equations, RELATIVE motion, DATA reduction, ARITHMETIC, ALGORITHMS
- Abstract
HPC trends favor algorithms and implementations that reduce data motion relative to FLOPS. We investigate the use of lossy compressed data arrays in place of traditional IEEE floating point arrays to store the primary data of calculations. Simulation is fundamentally an exercise in controlled approximation, and error introduced by finite-precision arithmetic (or lossy compression) is just one of several sources of error that need to be managed to ensure sufficient accuracy in a computed result. We describe ZFP, a compressed numerical format designed for in-memory storage of multidimensional arrays, and summarize theoretical results that demonstrate that the error of repeated lossy compression can be bounded and controlled. Furthermore, we establish a relationship between grid resolution and compression-induced errors and show that, contrary to conventional floating point, ZFP reduces finite-difference errors with finer grids. We present example calculations that demonstrate data reduction by 4x or more with negligible impact on solution accuracy. Our results further demonstrate several orders-of-magnitude increase in accuracy using ZFP over IEEE floating point and Posits for the same storage budget. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
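ZFP ships Python bindings (zfpy) that expose the compression described above directly; a minimal sketch of fixed-accuracy compression of a smooth field follows. The tolerance value and field are illustrative, and the in-memory compressed-array classes the paper describes go well beyond this whole-array call.

```python
import numpy as np
import zfpy  # Python bindings for ZFP (pip install zfpy)

# Smooth 2-D field: the kind of correlated data ZFP compresses well.
x = np.linspace(0, 4 * np.pi, 512)
field = np.sin(x)[:, None] * np.cos(x)[None, :]

buf = zfpy.compress_numpy(field, tolerance=1e-6)  # fixed-accuracy mode
out = zfpy.decompress_numpy(buf)

print("ratio: %.1fx" % (field.nbytes / len(buf)))
print("max error:", np.max(np.abs(field - out)))  # controlled by the tolerance
```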
11. Compressive sensing principles applied in time and space for three‐dimensional land seismic data acquisition and processing.
- Author
- Jeong, Woodon, Tsingas, Constantinos, Almubarak, Mohammed S., and Ma, Yue
- Subjects
- ACQUISITION of data, DATA reduction, INVERSE problems, ELECTRONIC data processing, SIGNAL processing, THRESHOLDING algorithms
- Abstract
Compressive sensing introduces novel perspectives on non‐uniform sampling, leading to substantial reductions in acquisition cost and cycle time compared to current seismic exploration practices. Non‐uniform spatial sampling, achieved through source and/or receiver areal distributions, and non‐uniform temporal sampling, facilitated by simultaneous‐source acquisition schemes, enable compression and/or reduction of seismic data acquisition time and cost. However, acquiring seismic data using compressive sensing may encounter challenges such as an extremely low signal‐to‐noise ratio and the generation of interference noise from adjacent sources. A significant challenge to this innovative approach is to demonstrate the translation of theoretical gains in sampling efficiency into operational efficiency in the field. In this study, we propose a spatial compression scheme based on compressive sensing theory, aiming to obtain an undersampled survey geometry by minimizing the mutual coherence of a spatial sampling operator. Building upon an optimised spatial compression geometry, we subsequently consider temporal compression through a simultaneous‐source acquisition scheme. To address challenges arising from the recorded compressed seismic data in the non‐uniform temporal and spatial domains, such as missing traces and crosstalk noise, we present a joint deblending and reconstruction algorithm. Our proposed algorithm employs the iterative shrinkage‐thresholding method to solve an ℓ2–ℓ1 optimization problem in the frequency–wavenumber–wavenumber (ω–kx–ky) domain. Numerical experiments demonstrate that the proposed algorithm produces excellent deblending and reconstruction results, preserving data quality and reliability. These results are compared with non‐blended and uniformly acquired data from the same location illustrating the robustness of the application. This study exemplifies how the theoretical improvements based on compressive sensing principles can significantly impact seismic data acquisition in terms of spatial and temporal sampling efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
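The ℓ2–ℓ1 optimization named above is classically solved by iterative shrinkage-thresholding (ISTA). Below is the generic ISTA core on a toy sparse-recovery problem; the paper applies the same machinery in the ω–kx–ky domain with a blending operator, which this sketch does not attempt.

```python
import numpy as np

def soft(x, t):                      # soft-thresholding: the ell_1 proximal step
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=300):
    """Minimize ||Ax - y||^2 / 2 + lam * ||x||_1 by shrinkage-thresholding."""
    L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - A.T @ (A @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(0)
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.normal(0, 1, 8)
A = rng.normal(0, 1, (80, 200)) / np.sqrt(80)   # undersampled measurements
x_hat = ista(A, A @ x_true, lam=0.01)
print("max abs error:", np.max(np.abs(x_hat - x_true)))
```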
12. In‐Sensor Touch Analysis for Intent Recognition.
- Author
- Xu, Yijing, Yu, Shifan, Liu, Lei, Lin, Wansheng, Cao, Zhicheng, Hu, Yu, Duan, Jiming, Huang, Zijian, Wei, Chao, Guo, Ziquan, Wu, Tingzhu, Chen, Zhong, Liao, Qingliang, Zheng, Yuanjin, and Liao, Xinqin
- Subjects
- TACTILE sensors, ARTIFICIAL intelligence, SENSOR arrays, STIMULUS & response (Psychology), DATA reduction
- Abstract
Tactile intent recognition systems, which are highly desired for satisfying human needs and providing humanized services, must accurately understand and identify human intent. They generally utilize time‐driven sensor arrays to achieve high spatiotemporal resolution; however, these encounter inevitable challenges of low scalability, huge data volumes, and complex processing. Here, an event‐driven intent recognition touch sensor (IR touch sensor) with in‐sensor computing capability is presented. The merits of event‐driven operation and in‐sensor computing enable the IR touch sensor to achieve ultrahigh resolution and obtain complete intent information from intrinsically concise data. It achieves critical signal extraction of action trajectories with a rapid response time of 0.4 ms and excellent durability of >10 000 cycles, bringing an important breakthrough in tactile intent recognition. Versatile applications prove the integrated functions of the IR touch sensor for great interactive potential in all‐weather environments regardless of shading, dynamics, darkness, and noise. Unconscious and even hidden action features can be perfectly extracted, with an ultrahigh recognition accuracy of 98.4% for intent recognition. A further auxiliary diagnostic test demonstrates the practicability of the IR touch sensor in telemedicine palpation and therapy. This groundbreaking integration of sensing, data reduction, and ultrahigh‐accuracy recognition will propel the leapfrog development of conscious machine intelligence. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Editorial: Pushing frontiers—imaging for photon science.
- Author
- Sedgwick, Iain, Wunderer, Cornelia B., and Zhang, Jiaguo
- Subjects
- X-ray lasers, PARTICLE physics, HARD X-rays, IMAGE converters, SOFT X rays, PHOTON detectors, FREE electron lasers
- Abstract
The editorial in "Frontiers in Physics" discusses the challenges and advancements in developing detectors for photon science, focusing on X-ray imaging detectors and sensors. The text highlights the need for detectors to meet the performance increase of new photon sources like Free Electron Lasers and Diffraction-Limited Storage Rings. It also addresses the challenges of data reduction and processing, as well as operational complexities in running imaging systems at photon science facilities. The editorial emphasizes the importance of simplifying system integration and calibration to enhance user interest and data quality in imaging detectors for photon science. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
14. Precision of Estimated Growth Parameters of Yellowfin Tuna (Thunnus albacares) From Length‐Frequency Data Estimated by Bootstrapping.
- Author
- Taufani, Wiwiet Teguh and Matsuishi, Takashi Fritz
- Subjects
- YELLOWFIN tuna, FISH populations, FISHERY management, FISHERIES, DATA reduction
- Abstract
Over 60% of the world's fish stocks suffer from limited data, which hampers effective fisheries management. Researchers have developed stock assessment methods for data‐limited fisheries using length‐frequency data, but their reliability was questionable and not well researched. We evaluated the precision of the widely used length‐based method ELEFAN using 24 months of length‐frequency data from 14,190 individual yellowfin tuna, together with sequential and interval data fractions. Using bootstrapping (1000 times) and data reduction, the growth parameters L∞, K, and Φ′ and their precision were estimated. The CVs of L∞, K, and Φ′ were 2.55%, 23.04%, and 2.35%, respectively. From the results of the data reduction, sampling at least once every 1 or 2 months, with 12 measurements of 500 individuals per measurement on average, is recommended for achieving high precision with a CV of Φ′ < 3%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
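The precision figures above come from bootstrapping an estimator and reporting coefficients of variation. A minimal sketch of that procedure follows, with the sample mean standing in for ELEFAN's growth-curve fit; the data and estimator are illustrative assumptions.

```python
import numpy as np

def bootstrap_cv(estimate, data, n_boot=1000, seed=0):
    """CV (%) of a parameter estimator via nonparametric bootstrap."""
    rng = np.random.default_rng(seed)
    stats = [estimate(rng.choice(data, size=len(data), replace=True))
             for _ in range(n_boot)]
    return 100 * np.std(stats) / np.mean(stats)

# Illustrative stand-in: CV of an estimated mean length; ELEFAN's growth-curve
# fit would replace np.mean as the estimator in the paper's setting.
lengths = np.random.default_rng(1).normal(120, 15, 500)
print("CV = %.2f%%" % bootstrap_cv(np.mean, lengths))
```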
15. Surgical Synergy in Primary Open-Angle Glaucoma: Assessing Safety and Efficacy of Hydrus, iStent, and Gonioscopy-Assisted Transluminal Trabeculotomy in Glaucoma Management.
- Author
- Ayoub, Mohammad Zeyad Mohammad and Al-Nahrawy, Ahmed
- Subjects
- OPEN-angle glaucoma, MINIMALLY invasive procedures, INTRAOCULAR pressure, MEDICATION safety, DATA reduction
- Abstract
Background/Objectives: This paper compares the outcomes (safety and efficacy) of three minimally invasive glaucoma surgeries (MIGSs), the Hydrus Microstent, iStent, and Gonioscopy-Assisted Transluminal Trabeculotomy (GATT), for intraocular pressure (IOP) reduction in patients with primary open-angle glaucoma (POAG). Methods: A literature search of Ovid Medline and Embase identified studies evaluating the Hydrus, iStent, and GATT. Data on IOP reduction, medication use, and complications were analyzed. Results: Studies show the Hydrus, iStent, and GATT reduce IOP and medication burden in POAG patients, with some complications. For the Hydrus, studies showed 37.09% (27.5 ± 4.4 to 17.3 ± 3.7 mmHg) and 25% (16.8 to 12.6 mmHg) IOP reductions. Meanwhile, medication burden decreased from 2.5 ± 0.7 to 1.0 and from 2.1 to 1.15. For the iStent, studies showed a 36.39% (21.1 to 13.4 mmHg) and 8.19% (17.1 to 15.7 mmHg) IOP drop. Medication burden decreased from 2.87 to 1.24 and from 1.7 to 0.26. For GATT, studies showed a 49.33% (27.70 ± 10.30 to 14.04 ± 3.75 mmHg) and 39.09% (26.40 ± 6.37 to 16.08 ± 2.38 mmHg) IOP drop. Medication burden was reduced from 3.73 ± 0.98 to 1.82 ± 1.47 and from 3.12 ± 0.80 to 0.45 ± 0.96. Conclusions: The Hydrus, iStent, and GATT are effective alternatives to trabeculectomy for mild to moderate POAG. They reduce and control IOP and dependence on medications with manageable safety profiles. In all three options, there were some clinically significant complications based on the p-value. For the Hydrus, it was PAS. For the iStent, they were PAS, FB sensation, IOP spikes, and microhyphema. For GATT, it was IOP spikes. However, further long-term studies, especially randomized controlled trials, are needed to support these results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Memristor based on carbon nanotube gelatin composite film as artificial optoelectronic synapse for image processing.
- Author
- Sun, Yanmei, Li, Bingxun, Liu, Ming, and Zhang, Zekai
- Subjects
- CARBON nanotubes, OPTOELECTRONIC devices, IMAGE processing, CHARGE transfer, DATA reduction
- Abstract
Photoelectric artificial synapses based on memristors are an effective route to realizing neuromorphic computation. This study presents an optoelectronic responsive artificial synapse made of a composite material consisting of gelatin and carbon nanotubes. The memristor demonstrates characteristics of analog resistive switching, the ability to store multiple memory states, and impressive retention properties. It has the capability to induce an excitatory post-synaptic current by means of electrical pulses or pulsed light exposure. The excitatory post-synaptic current can be modulated by the number, amplitude and interval of electrical pulses, as well as the action time, interval and light intensity of optical pulses. The artificial synapse showcases the emulation of fundamental Hebbian learning protocols, including spike timing dependent plasticity and spike amplitude dependent plasticity. In addition, the charge transfer in the carbon nanotube gelatin composite optoelectronic memristor is investigated through first-principles calculations, shedding light on its operational mechanism. Experimental results show that these devices have the potential to be utilized for processing image information, resulting in a significant reduction of input data and training expenses when recognizing handwritten numbers. Overall, the optoelectronic synapse exhibits promising image processing prospects in the field of neuromorphic computing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. ClusFC‐IoT: A clustering‐based approach for data reduction in fog‐cloud‐enabled IoT.
- Author
- Hemmati, Atefeh and Rahmani, Amir Masoud
- Subjects
- DATA reduction, COMPUTER workstation clusters, INTERNET of things, DATA management, DATA transmission systems
- Abstract
Summary: The Internet of Things (IoT) is an ever‐expanding network technology that connects diverse objects and devices, generating vast amounts of heterogeneous data at the network edge. These vast volumes of data present significant challenges in data management, transmission, and storage. In fog‐cloud‐enabled IoT, where data are processed at the edge (fog) and in the cloud, efficient data reduction strategies become imperative. One such method is clustering, which groups similar data points together to reduce redundancy and facilitate more efficient data management. In this paper, we introduce ClusFC‐IoT, a novel two‐phase clustering‐based approach designed to optimize the management of IoT‐generated data. In the first phase, performed in the fog layer, we use the K‐means clustering algorithm to group the data received from the IoT layer by similarity. This initial clustering creates distinct clusters, with a central data point representing each cluster. Incoming data from the IoT side are assigned to these existing clusters if they have similar characteristics, which reduces data redundancy and the volume transferred to the cloud layer. In the second phase, performed in the cloud layer, we perform additional K‐means clustering on the data obtained from the fog layer. This secondary clustering phase consolidates the similarities between the clusters created in the fog layer, further optimizes the data representation, and reduces redundancy. To verify the effectiveness of ClusFC‐IoT, we implemented it using four different IoT data sets in Python 3. The implementation results show a reduction in data transmission compared to other methods, which makes ClusFC‐IoT very suitable for resource‐constrained IoT environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
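A compact sketch of the two-phase idea, assuming scikit-learn: K-means in the fog layer forwards only cluster centroids, and the cloud re-clusters those centroids. Cluster counts and data are illustrative; the paper's assignment of incoming points to existing clusters and its four evaluation datasets are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
iot_readings = rng.normal(0, 1, (10_000, 4))         # raw IoT-layer data

# Phase 1 (fog layer): cluster locally and forward only the centroids.
fog = KMeans(n_clusters=50, n_init=10, random_state=0).fit(iot_readings)
to_cloud = fog.cluster_centers_                      # 10,000 rows -> 50 rows

# Phase 2 (cloud layer): re-cluster the fog centroids for the global view.
cloud = KMeans(n_clusters=10, n_init=10, random_state=0).fit(to_cloud)
print("rows transmitted:", len(to_cloud),
      "cloud-level clusters:", len(cloud.cluster_centers_))
```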
18. Markerless vision-based knee osteoarthritis classification using machine learning and gait videos.
- Author
- Ben Hassine, Slim, Balti, Ala, Abid, Sabeur, Ben Khelifa, Mohamed Moncef, and Sayadi, Mounir
- Subjects
- MACHINE learning, MOTION capture (Human mechanics), KNEE osteoarthritis, MODEL railroads, DATA reduction, GAIT in humans
- Abstract
Introduction: Knee osteoarthritis (KOA) is a major health issue affecting millions worldwide. This study employs machine learning algorithms to analyze human gait using kinematic data, aiming to enhance the diagnosis and detection of KOA. By adopting this approach, we contribute to the development of effective diagnostic methods for KOA, a prevalent joint condition. Methods: The methodology is structured around several critical steps to optimize the model's performance. These steps include extracting kinematic features from video data to capture essential gait dynamics, applying data filtering and reduction techniques to remove noise and enhance data quality, and calculating key gait parameters to boost the model's predictive power. The machine learning model trains on these refined features, validates through cross-validation for robust performance assessment, and tests on unseen data to ensure generalizability. Results: Our approach demonstrates significant improvements in classification accuracy, highlighting its potential for early and precise KOA detection. The model achieves a high classification accuracy, indicating its effectiveness in distinguishing KOA-related gait patterns. Discussion: Furthermore, a comparative analysis with another model trained on the same dataset demonstrates the superiority of our method, suggesting that the proposed approach serves as a reliable tool for early KOA detection and could improve clinical diagnostic workflows. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
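The described workflow (feature extraction, filtering and reduction, training, cross-validation, held-out testing) maps onto a standard scikit-learn pipeline. The sketch below uses random stand-ins for the kinematic features and PCA as the reduction step, both assumptions rather than the paper's exact choices.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

# Stand-in for per-video kinematic feature vectors (joint angles, cadence, ...).
rng = np.random.default_rng(0)
X = rng.normal(0, 1, (200, 60))
y = rng.integers(0, 2, 200)                  # 0 = healthy, 1 = KOA

pipe = make_pipeline(StandardScaler(), PCA(n_components=10),
                     RandomForestClassifier(random_state=0))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
print("cv accuracy:", cross_val_score(pipe, X_tr, y_tr, cv=5).mean())
print("held-out accuracy:", pipe.fit(X_tr, y_tr).score(X_te, y_te))
```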
19. Unlocking the power of multi-institutional data: Integrating and harmonizing genomic data across institutions.
- Author
- Chen, Yuan, Shen, Ronglai, Feng, Xiwen, and Panageas, Katherine
- Subjects
- LATENT variables, CANCER patient care, PARAMETER estimation, DATA integration, DATA reduction
- Abstract
Cancer is a complex disease driven by genomic alterations, and tumor sequencing is becoming a mainstay of clinical care for cancer patients. The emergence of multi-institution sequencing data presents a powerful resource for learning real-world evidence to enhance precision oncology. GENIE BPC, led by the American Association for Cancer Research, establishes a unique database linking genomic data with clinical information for patients treated at multiple cancer centers. However, leveraging sequencing data from multiple institutions presents significant challenges. Variability in gene panels can lead to loss of information when analyses focus on genes common across panels. Additionally, differences in sequencing techniques and patient heterogeneity across institutions add complexity. High data dimensionality, sparse gene mutation patterns, and weak signals at the individual gene level further complicate matters. Motivated by these real-world challenges, we introduce the Bridge model. It uses a quantile-matched latent variable approach to derive integrated features to preserve information beyond common genes and maximize the utilization of all available data, while leveraging information sharing to enhance both learning efficiency and the model's capacity to generalize. By extracting harmonized and noise-reduced lower-dimensional latent variables, the true mutation pattern unique to each individual is captured. We assess the model's performance and parameter estimation through extensive simulation studies. The extracted latent features from the Bridge model consistently excel in predicting patient survival across six cancer types in GENIE BPC data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
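One ingredient named above, quantile matching across heterogeneous sources, can be sketched in isolation: map one institution's values onto another's empirical distribution. This is a generic harmonization step under assumed toy distributions, not the Bridge model's latent-variable machinery.

```python
import numpy as np

def quantile_match(source, reference):
    """Map one institution's feature values onto another's distribution.

    Generic quantile matching: each value keeps its rank but takes the
    corresponding quantile of the reference sample.
    """
    ranks = np.argsort(np.argsort(source)) / (len(source) - 1)  # empirical CDF
    return np.quantile(reference, ranks)

rng = np.random.default_rng(0)
site_a = rng.gamma(2.0, 2.0, 500)     # e.g., mutation burden at center A
site_b = rng.gamma(2.0, 3.5, 500)     # same signal, shifted assay/panel
harmonized = quantile_match(site_b, site_a)
print(np.quantile(site_a, [.25, .5, .75]).round(2),
      np.quantile(harmonized, [.25, .5, .75]).round(2))  # now comparable
```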
20. Event data representation based on spatio-temporal neighborhood-correlation denoised time surfaces.
- Author
- 林凯滨, 陈云华, 钟金煜, and 魏鹏飞
- Subjects
- SIGNAL-to-noise ratio, KERNEL functions, EXPONENTIAL functions, COMPUTATIONAL complexity, DATA reduction
- Abstract
Event cameras possess advantages such as ultra-high dynamic range and ultra-low latency. Extracting effective spatio-temporal features from the output data of event cameras through event stream segmentation, filtering, and event representation is crucial to leveraging these advantages. While existing event representation methods based on timestamps and exponential kernel functions for calculating time surfaces can preserve more informative details in events, they still face issues like high event redundancy and vulnerability to noise events. To address the high redundancy in existing event stream segmentation and filtering methods, this paper proposed a novel event downscaling algorithm based on density sorting. This algorithm analyzed the spatio-temporal neighborhood relationships within the event stream to calculate spatio-temporal correlation density and performed density sorting accordingly, thereby reducing redundant events and minimizing the consumption of computational resources. Furthermore, to address the vulnerability of existing event representations to noise events, this paper introduced an event data representation method based on spatio-temporal neighborhood correlation for denoising on the time surface. This method considered spatio-temporal correlations to form event clusters on the time surface, effectively selecting valid events and enhancing the signal-to-noise ratio while reducing computational complexity. The proposed methods achieved state-of-the-art (SOTA) classification accuracy on three mainstream neuromorphic datasets. In summary, this paper focused on event stream data dimensionality reduction and event representation for event camera object classification, effectively improving the efficiency and accuracy of event camera object classification. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
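A generic reading of density-sorted event downscaling: score each event by how many spatio-temporal neighbors it has, sort, and keep the densest fraction, which preferentially discards isolated noise events. The neighborhood radius, keep fraction, and event distributions below are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def downscale_events(events, keep=0.5, radius=3.0):
    """Keep the fraction of events with the highest spatio-temporal density.

    events: (N, 3) array of (x, y, t). Density = neighbor count within
    `radius`; isolated noise events score low and are dropped first.
    """
    density = cKDTree(events).query_ball_point(events, r=radius,
                                               return_length=True)
    order = np.argsort(density)[::-1]           # densest (least noisy) first
    return events[order[: int(keep * len(events))]]

rng = np.random.default_rng(0)
signal = rng.normal([50, 50, 10], [2, 2, 1], (900, 3))  # correlated events
noise = rng.uniform(0, 100, (100, 3))                   # isolated noise events
kept = downscale_events(np.vstack([signal, noise]), keep=0.5)
print(kept.shape)
```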
21. Three‐Way Data Reduction Based on Essential Information.
- Author
- Vitale, Raffaele, Azizi, Azar, Ghaffari, Mahdiyeh, Omidikia, Nematollah, and Ruckebusch, Cyril
- Subjects
- COMPUTATIONAL geometry, CONVEX geometry, SPECTRAL imaging, DATA reduction, FLUORESCENCE spectroscopy
- Abstract
In this article, the idea of essential information‐based compression is extended to trilinear datasets. This basically boils down to identifying and labelling the essential rows (ERs), columns (ECs) and tubes (ETs) of such three‐dimensional datasets, which by themselves allow linear reconstruction of the entire space of the original measurements. ERs, ECs and ETs can be determined by exploiting convex geometry computational approaches such as convex hull or convex polytope estimations and can be used to generate a reduced version of the data at hand. These compressed data and their uncompressed counterpart share the same multilinear properties, and their factorisation (carried out by means of, for example, parallel factor analysis–alternating least squares [PARAFAC‐ALS]) yields, in principle, indistinguishable results. In more detail, an algorithm for the assessment and extraction of the essential information encoded in trilinear data structures is proposed here. Its performance was evaluated in both real‐world and simulated scenarios, which highlighted the benefits that this novel data reduction strategy can bring in domains like multiway fluorescence spectroscopy and imaging. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
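The convex-geometry selection above can be sketched for a single matrix slice: normalize row profiles, project onto leading scores, and keep the convex-hull vertices as the essential rows (the same logic applies per mode for columns and tubes). Component counts and the simulated mixture are assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def essential_indices(M, n_components=2):
    """Indices of rows whose convex mixtures can reproduce all other rows.

    Sketch of essential-information selection: normalize row profiles,
    project onto leading scores, and keep the convex-hull vertices.
    """
    P = M / M.sum(axis=1, keepdims=True)     # intensity-normalized profiles
    P = P - P.mean(axis=0)
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    return np.unique(ConvexHull(scores).vertices)

rng = np.random.default_rng(0)
C, S = rng.random((100, 3)), rng.random((3, 40))
M = C @ S + 1e-3                              # 100 mixtures of 3 pure spectra
ers = essential_indices(M)
print(len(ers), "essential rows out of", M.shape[0])
```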
22. Implementation of the Village Fund Allocation (ADD) Management Policy in Kon Tumere Village, Kabawo Sub-district, Muna Regency.
- Author
- Siwij, Devie S. R., Hamriadin, Muh., Ramaino, Almen S., and Essing, Ismiaty
- Subjects
- COMMUNITY involvement, DATA reduction, SEMI-structured interviews, ACQUISITION of data, DATA analysis
- Abstract
The aim of this research is to find out how the Village Fund Allocation Policy (ADD) is implemented in Kontumere Village, Kabawo District, Muna Regency, and to identify the factors that influence its implementation. Data were collected through observation, semi-structured (open) interviews, and documentation. Data analysis methods include data reduction, data presentation, and drawing conclusions. Viewed from the perspective of achieving objectives, the results of this research show that the implementation of the Village Fund Allocation Policy in Kontumere Village has not been achieved comprehensively or optimally. This is influenced by several inhibiting factors, namely (1) communication: socialization to the public regarding the ADD policy is non-existent (Weakness), which makes it difficult to invite community participation in implementing ADD and in monitoring activities (Threat); (2) resources: lack of ability of implementers to operate computers (Weakness) and lack of adequate village income support (Threat); (3) attitude of implementers: lack of responsiveness of ADD implementers (Weakness), who consider the ADD policy a mere routine policy (Threat); and (4) bureaucratic structure, namely the lack of coordination between policy implementers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Hybrid deep learning technique for optimal segmentation and classification of multi-class skin cancer.
- Author
- Subhashini, G. and Chandrasekar, A.
- Subjects
- FEATURE extraction, SKIN cancer, TUMOR classification, DATA reduction, CANCER diagnosis, DEEP learning
- Abstract
This study introduces a novel deep learning-based approach for skin cancer diagnosis and treatment planning to overcome existing limitations. The proposed system employs a series of innovative algorithms, including IQQO for preprocessing, TSSO for cancer region isolation, and FA-MFC for data dimensionality reduction. The USSL-Net DCNN extracts hidden features, and the BGR-QNN enables multi-class classification. Evaluated on Kaggle and ISIC-2019 datasets, the model achieves impressive accuracy, up to 96.458% for Kaggle and 94.238% for ISIC-2019. This hybrid deep learning technique shows great potential for improving skin cancer classification, thus enhancing diagnosis and treatment outcomes and ultimately reducing mortality rates. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Applications of Hyperspectral Imaging Technology Combined with Machine Learning in Quality Control of Traditional Chinese Medicine from the Perspective of Artificial Intelligence: A Review.
- Author
- Pan, Yixia, Zhang, Hongxu, Chen, Yuan, Gong, Xingchu, Yan, Jizhong, and Zhang, Hui
- Subjects
- CHINESE medicine, ARTIFICIAL intelligence, IMAGE analysis, DATA reduction, RESEARCH personnel
- Abstract
Traditional Chinese medicine (TCM) is a treasure of China, and the quality control of TCM is of crucial importance. In recent years, with the rapid rise of artificial intelligence (AI) and the rapid development of hyperspectral imaging (HSI) technology, the combination of the two has been widely used in the quality evaluation of TCM. Machine learning (ML) is the core of AI, and its progress in rapid analysis and higher accuracy improves the potential of applying HSI to the field of TCM. This article reviewed five aspects of ML applied to hyperspectral data analysis of TCM: partition of the data set, data preprocessing, data dimension reduction, qualitative or quantitative models, and model performance measurement. The different algorithms proposed by researchers for quality assessment of TCM were also compared. Finally, the challenges in the analysis of hyperspectral images for TCM were summarized, and future work was discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
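The five aspects reviewed above line up with a standard chemometrics pipeline; a compact scikit-learn sketch follows, with random spectra standing in for per-sample hyperspectral data and PCA + SVC as assumed choices for the reduction and modeling steps.

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler   # (2) preprocessing
from sklearn.decomposition import PCA              # (3) dimension reduction
from sklearn.svm import SVC                        # (4) qualitative model

# Stand-in for HSI of TCM samples: one averaged spectrum (200 bands) per sample.
rng = np.random.default_rng(0)
spectra = rng.normal(0, 1, (240, 200))
grade = rng.integers(0, 3, 240)                    # assumed quality classes

X_tr, X_te, y_tr, y_te = train_test_split(spectra, grade, test_size=0.3,
                                          random_state=0)  # (1) partition
model = make_pipeline(StandardScaler(), PCA(n_components=12), SVC())
model.fit(X_tr, y_tr)
print("CV accuracy:", cross_val_score(model, X_tr, y_tr, cv=5).mean())
print("test accuracy:", model.score(X_te, y_te))   # (5) performance measurement
```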
25. Innovative TCAM Solutions for IPv6 Lookup: Don't Care Reduction and Data Relocation Techniques.
- Author
- Anh PHAM, Doanh BUI, Phuc Thien Phan NGUYEN, and Linh TRAN
- Subjects
- ASSOCIATIVE storage, NETWORK routers, INTERNET protocol address, DATA reduction, SUFFIXES & prefixes (Grammar)
- Abstract
Ternary Content-Addressable Memory (TCAM) enables high-speed searches by comparing search data with all stored data in a single clock cycle, using ternary logic ("0", "1", "X" for "don't care") for flexible matching. This makes TCAM ideal for applications like network routers and lookup tables. However, TCAM's speed comes at the cost of increased silicon area and limited memory capacity. This paper introduces a low-area, enhanced-capacity TCAM for IPv6 lookup tables using Don't Care Reduction (DCR) and Data Relocation (DR) techniques. The DCR technique requires only (N + log2(N))-bit memory for an N-bit IP address, instead of 2N-bit memory. The DR technique improves TCAM storage capacity by classifying IPv6 addresses into 4 different prefix-length types and relocating the data in the prefix bits into the "X" cells. The design features a 256 × 128-bit TCAM (eight 32 × 128-bit memory banks) on a 65 nm process with a 1.2 V operating voltage. Results show a 71.47% increase in area efficiency per stored IP value compared to conventional TCAM and a 20.97% increase compared to data-relocation TCAM. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
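The storage arithmetic behind DCR can be made concrete: a route prefix is fully described by its value plus a prefix length, so N + log2(N) bits suffice where a ternary encoding spends 2 bits per cell (2N total). A worked sketch for N = 128 follows; this models the arithmetic only, not the actual circuit encoding.

```python
N = 128  # IPv6 address width

def dcr_encode(prefix_bits: int, prefix_len: int):
    """Store (value, length) instead of 2N ternary bits: N + log2(N) bits."""
    return prefix_bits, prefix_len   # 128 bits + 7-bit length field

def dcr_match(key: int, entry) -> bool:
    value, length = entry
    mask = ((1 << length) - 1) << (N - length)  # 1s over the defined prefix
    return (key & mask) == (value & mask)       # trailing bits are don't-care

# A /48 route: conventional ternary storage needs 2 * 128 = 256 bits;
# DCR needs 128 + 7 = 135 bits (the paper's N + log2(N) figure, ~47% smaller).
route = dcr_encode(0x20010DB8_00000000_00000000_00000000, 48)
print(dcr_match(0x20010DB8_00000000_00000000_00000001, route))  # True
```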
26. Mesh-Float-Zip (MFZ): Manifold harmonic bases for unstructured spatial data compression.
- Author
- Doherty, Kevin, Becker, Stephen, and Doostan, Alireza
- Subjects
- PARTIAL differential equations, COLLOCATION methods, SCIENTIFIC computing, DATA reduction, JPEG (Image coding standard)
- Abstract
Block-coder style data compression schemes, such as JPEG, ZFP and SPERR, rely on de-correlating transformations in order to efficiently store data with entropy encoding schemes. These de-correlating transformations typically rely on a uniform spatiotemporal sampling of the transformations and/or the data. Thus, for nonuniform data, which may include spatial (and temporal) nonuniformity, such as those arising from partial differential equations solved over complex geometries and unstructured grids, there is neither an efficient de-correlating transformation nor a data compression pipeline that sufficiently adapts to this nonuniformity. We presented the manifold harmonic basis (MHB) as a de-correlating transformation, and proposed a collocation method over the nonuniform domain for solving the eigenproblem that defines the MHB. Additionally, we demonstrated a modified version of ZFP's compression pipeline that adapts to the problems presented by nonuniformity in a block-coder style compression scheme. This results in a state-of-the-art spatial data compression technique for nonuniform data, which we call Mesh-Float-Zip (MFZ). We illustrated the performance of the proposed compression strategy and compared its accuracy against the ZFP and SZ compressors (applied to both original and reshuffled data) on three scientific computing datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
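The de-correlating transform named above is the low-frequency eigenbasis of a Laplacian built on the nonuniform point set itself. The sketch below substitutes a k-NN graph Laplacian for the paper's collocation-based operator and truncates the spectral coefficients; the graph construction, mode count, and field are assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
pts = rng.random((2000, 2))                        # nonuniform sample points
f = np.sin(4 * pts[:, 0]) * np.cos(4 * pts[:, 1])  # field to compress

# k-NN graph Laplacian as a stand-in for the mesh/manifold Laplacian.
_, nbrs = cKDTree(pts).query(pts, k=8)             # first neighbor is self
rows = np.repeat(np.arange(len(pts)), 7)
cols = nbrs[:, 1:].ravel()
W = sp.coo_matrix((np.ones(rows.size), (rows, cols)), shape=(len(pts),) * 2)
W = ((W + W.T) > 0).astype(float)                  # symmetrized adjacency
L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W

# Manifold-harmonic-style basis = low-frequency Laplacian eigenvectors.
vals, basis = eigsh(L.tocsc(), k=64, sigma=-0.01)  # 64 smoothest modes
coeffs = basis.T @ f                               # 2000 values -> 64 coeffs
print("rel. error:", np.linalg.norm(basis @ coeffs - f) / np.linalg.norm(f))
```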
27. Lifting MGARD: Construction of (pre)wavelets on the interval using polynomial predictors of arbitrary order.
- Author
- Reshniak, Viktor, Ferguson, Evan, Gong, Qian, Vidal, Nicolas, Archibald, Richard, and Klasky, Scott
- Subjects
- DATA compression, WAVELET transforms, DATA reduction, POLYNOMIALS, ALGORITHMS
- Abstract
MGARD (MultiGrid Adaptive Reduction of Data) is an algorithm for compressing and refactoring scientific data, based on the theory of multigrid methods. The core algorithm is built around stable multilevel decompositions of conforming piecewise linear $C^0$ finite element spaces, enabling accurate error control in various norms and derived quantities of interest. In this work, we extend this construction to arbitrary order Lagrange finite elements $\mathbb{Q}_p$, $p \geq 0$, and propose a reformulation of the algorithm as a lifting scheme with polynomial predictors of arbitrary order. Additionally, a new formulation using a compactly supported wavelet basis is discussed, and an explicit construction of the proposed wavelet transform for uniform dyadic grids is described. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
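A lifting step with a polynomial predictor is easy to show in one dimension: split into even/odd samples, predict odds from evens, keep the small residuals as detail coefficients, then update the evens. The sketch below uses the order-1 (linear) predictor and a standard 1/4 update weight; MGARD's multilevel, arbitrary-order construction is far more general, and inversion simply replays these steps in reverse.

```python
import numpy as np

def lift_forward(x):
    """One lifting step: predict odds from evens, then update the evens.

    Linear predictor; higher-order polynomial predictors use more even
    neighbors per odd sample in the same predict/update structure.
    """
    even, odd = x[::2].astype(float), x[1::2].astype(float)
    pred = 0.5 * (even[:-1] + even[1:])   # linear prediction of odd samples
    detail = odd - pred                   # small wherever the data is smooth
    coarse = even.copy()
    coarse[:-1] += 0.25 * detail          # update step preserves the mean
    coarse[1:] += 0.25 * detail
    return coarse, detail

x = np.sin(np.linspace(0, np.pi, 65))
coarse, detail = lift_forward(x)
print(np.abs(detail).max())   # detail coefficients are tiny => compressible
```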
28. An Orthonormalization-Free and Inversion-Free Algorithm for Online Estimation of Principal Eigenspace.
- Author
- Zhou, Siyun and Xu, Liwei
- Subjects
- MATRIX inversion, ONLINE algorithms, DATA reduction, ALGORITHMS, COST
- Abstract
In this paper, we study the problem of estimating the principal eigenspace over an online data stream. First-order optimization methods are appealing choices for this problem thanks to their high efficiency and easy implementation. The existing first-order solvers, however, require either per-step orthonormalization or matrix inversion, which empirically puts pressure to the parameter tuning, and also incurs extra costs of rank augmentation. To get around these limitations, we introduce a penalty-like term controlling the distance from the Stiefel manifold into matrix Krasulina's method, and propose the first orthonormalization- and inversion-free incremental PCA scheme (Domino). The Domino is shown to achieve the computational speed-up, and own the ability of automatic correction on the numerical rank. It also maintains the advantage of Krasulina's method, e.g., variance reduction on low-rank data. Moreover, both of the asymptotic and non-asymptotic convergence guarantees are established for the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
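A schematic reading of the idea, not the exact Domino iteration: take an Oja/Krasulina-style streaming step toward the top-k eigenspace and add a penalty pulling UᵀU toward the identity, so neither QR nor a matrix inverse is ever formed. Step sizes, the penalty weight, and the toy covariance are assumptions.

```python
import numpy as np

def domino_like_update(U, x, eta, lam=1.0):
    """One streaming subspace update with no QR and no matrix inversion.

    (I - UU^T) x x^T U rotates the span toward the principal subspace;
    the penalty U (I - U^T U) steers U toward the Stiefel manifold.
    """
    k = U.shape[1]
    step = (x - U @ (U.T @ x))[:, None] * (x @ U)[None, :]
    penalty = U @ (np.eye(k) - U.T @ U)
    return U + eta * (step + lam * penalty)

rng = np.random.default_rng(0)
A = np.diag([5.0, 3.0, 0.1, 0.1, 0.1]) ** 0.5   # top-2 principal subspace
U = rng.normal(0, 0.1, (5, 2))
for t in range(20000):                           # one sample per update
    U = domino_like_update(U, A @ rng.normal(0, 1, 5), eta=1.0 / (50 + t))
print(np.round(U.T @ U, 2))   # close to the identity, with no per-step QR
```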
29. The Police Authority in Granting Crowd Permit in The Makassar Port Police Area.
- Author
- Athief, Ammar, Aspan, Zulkifli, Yunus, Ahsan, and Annisa, Arini Nur
- Subjects
- CRIMINAL investigation, POLICE chiefs, CROWD control, CRIME statistics, DATA reduction, CROWDS
- Abstract
Makassar City as the largest city in Eastern Indonesia is known to have a high crime rate. On the other hand, the most common and often faced community dynamics in Makassar city is community mobility that triggers crowds. So as a preventive effort, the police have the authority to issue a crowd permit to regulate community activities. This research aims to analyze the regulations and mechanisms for granting crowd permits and to identify the methods of police supervision of crowd activities in the Makassar Port Police area. The type of research used is empirical juridical research. Data collection is done through interviews and observations. The technique of analyzing research data starts from data reduction, data presentation, to verification and conclusion drawing. The results showed that: (1) Regulations in granting crowd permits are regulated in Law of the Republic of Indonesia Number 2 of 2002 concerning the Indonesian National Police and several Field Guidelines of the National Police Chief. The mechanism for granting permits includes: First, the criteria for activities are given for activities with large masses. Second, the flow of services has been listed in the task guidelines with clear steps. (2) Supervision of crowd activities is carried out through open maintenance of public security and order by the Samapta Bhayangkara (Sabhara) unit and closed by the Intelligence and Security Unit and the Criminal Investigation Unit. Meanwhile, supervision of police members is carried out through control by police administrative staff, discipline enforcement by the Profession and Security Unit, and post-activity and periodic consolidation and evaluation once a year. Thus, the authority of the Police in granting crowd permits in the Makassar Port Police area has been running well, although there are still shortcomings in terms of supervision, especially time security and police availability to accompany crowd activities until completion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Assessing the Impact of Observations on the Brazilian Global Atmospheric Model (BAM) Using Gridpoint Statistical Interpolation (GSI) System.
- Author
- Viana, Liviany Pereira and de Mattos, João Gerd Zell
- Subjects
- NUMERICAL weather forecasting, DATA assimilation, ATMOSPHERIC models, WEATHER forecasting, DATA reduction
- Abstract
This article describes the main features of the impacts of global observations on the reduction of errors in the data assimilation (DA) cycle carried out in the Brazilian Global Atmospheric Model (BAM) at the Center for Weather Forecast and Climate Studies [Centro de Previsão de Tempo e Estudos Climáticos (CPTEC)] at the Brazilian National Institute for Space Research [Instituto Nacional de Pesquisas Espaciais (INPE)]. These results show the importance of studying and evaluating the contribution of each observation to the DA system; therefore, two experiments (exp1/exp2) were performed with different configurations of the BAM model, with exp2 presenting the best fit between the Gridpoint Statistical Interpolation (GSI) and BAM systems. The BAM model was validated by the statistical metrics of root mean-square error and anomaly correlation, but this validation is not explored in this paper. A metric was applied that does not depend on the adjoint-based method, but only on the residuals that are made available in the GSI system for the observation space, given by the total impact, the fractional impact and the fractional beneficial impact. In general, the daily averages showed that the observations of the global system that contribute most to the reduction of errors in the DA cycle are from the pilot balloon data (−3.54/−3.45 J kg⁻¹) and the profilers (−2.13/−1.97 J kg⁻¹), and the smallest contributions came from the land (−0.28/−0.29 J kg⁻¹) and sea (−0.44/−0.44 J kg⁻¹) surfaces. The same pattern was observed for the synoptic times presented. However, when verifying the fraction of the impact by each type of observation, it was found that the radiance data (64.88/30.30%), followed by radiosondes (14.85/27.42%) and satellite winds (11.03/22.70%), are the most important fractions for both experiments. These results show that the DA system is working to generate the best analyses at the research center and that the deficiencies found in some observations can be adjusted to improve the development of the GSI and the BAM model, since together, the entire database used is evaluated, as well as the forecast model itself, indicating the relationship between the assertiveness of the atmospheric model and the DA system used at the research center. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
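For the impact metrics quoted above, one common reading of "fractional impact" is each observing system's share of the summed total impact; a small worked example over the exp1 values for the four systems named in the abstract follows (the paper's full observation list would change the percentages).

```python
# Fractional impact as the share of summed total impact, computed over the
# exp1 values quoted in the abstract (a subset of all observing systems).
impacts = {                      # total impact, J/kg (more negative = better)
    "pilot balloon": -3.54, "profilers": -2.13,
    "sea surface": -0.44, "land surface": -0.28,
}
total = sum(impacts.values())
for obs, val in impacts.items():
    print(f"{obs:>13}: {100 * val / total:5.1f}% of this subset's impact")
```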
31. Communication Patterns in Interpersonal Conflict in Premarriage Couples Experiencing Toxic Relationships.
- Author
- Suciati and Ramadhanty, Syauqinada
- Subjects
- INTERPERSONAL communication, PERSONALITY, OFFENSIVE behavior, COMMUNICATION patterns, DATA reduction
- Abstract
This study discusses communication patterns in conflict interaction in premarital couples experiencing toxic relationships. A toxic relationship is unhealthy, characterized by unsupportive attitudes, conflicts in which one tries to destroy the other, competition, disrespect, and lack of cohesiveness. This research employed a descriptive method. Data were collected through interviews. The data validity testing was performed using the source triangulation technique. The data analysis technique utilized the Miles and Huberman analysis model, consisting of data collection, data reduction, data presentation, and verification. This study discovered that conflicts experienced by premarital couples experiencing toxic relationships stemmed from different goals, poor communication, and lack of fulfillment of needs. These results revealed that both partners decided to stay in toxic relationships with different conflict resolution strategies. The first partner used the conflict avoidance style, whereas the second one applied the compromise conflict style. Factors influencing the use of conflict resolution styles for the first couple were perceptions of conflict, relationship dominance, and family closeness, whereas the second partner was influenced by emotional attachment and personality factors. The communication patterns adopted are also different, namely, the pattern of unbalanced separation in the first couple and the balance pattern in the second couple. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. A significant features vector for internet traffic classification based on multi-features selection techniques and ranker, voting filters.
- Author
- Munther, Alhamza, Abualhaj, Mosleh M., Alalousi, Alabass, and Fadhil, Hilal A.
- Subjects
- SUPERVISED learning, MACHINE learning, FEATURE selection, INTERNET traffic, SUPPORT vector machines
- Abstract
The pursuit of effective models with high detection accuracy has sparked great interest in anomaly detection of internet traffic. The issue still lies in creating a trustworthy and effective anomaly detection system that can handle massive data volumes and patterns that change in real time. The detection techniques used, especially the feature selection methods and machine learning algorithms, are crucial to the design of such a system. The fundamental difficulty in feature selection is choosing a smaller subset of features that is more related to the class. To reduce the dimensionality of the dataset, this research offered a multi-feature selection technique (MFST) using four filter techniques: fast correlation-based filter, significance feature evaluator, chi-square, and gain ratio. Each technique's output vector is passed through ranker and Borda voting filters, and the features with the highest vote counts and rank values are selected from the dataset. The presented MFST framework performed best when compared with the four strategies above functioning alone; three different classifiers were employed to test the accuracy: C4.5, naive Bayes, and support vector machine. The experiments employed ten datasets of different sizes with 10,000 to 300,000 instances. Only 8 out of 248 features were chosen, with classifier accuracies averaging 65%, 93.8%, and 95.5%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
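The ranker-plus-Borda-voting idea can be sketched with stock scikit-learn filters standing in for the four named techniques (FCBF and gain ratio have no direct sklearn equivalent, so chi-square, mutual information, and ANOVA F are assumed here): each filter ranks all features, Borda points are summed, and the top-voted subset is kept.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2, mutual_info_classif, f_classif
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=2000, n_features=40, n_informative=6,
                           random_state=0)
X = MinMaxScaler().fit_transform(X)          # chi2 requires non-negative input

# Three stand-in filters; each one scores (and hence ranks) every feature.
scores = [chi2(X, y)[0],
          mutual_info_classif(X, y, random_state=0),
          f_classif(X, y)[0]]

# Borda voting: a feature earns (n_features - rank) points from each filter.
n = X.shape[1]
borda = sum(n - np.argsort(np.argsort(-s)) for s in scores)
selected = np.argsort(-borda)[:8]            # keep the 8 top-voted features
print(sorted(selected))
```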
33. Reduction in mucosal‐associated invariant T cells (MAIT) in APECED patients is associated with elevated serum IFN‐γ concentration.
- Author
- Hetemäki, Iivo, Sarkkinen, Joona, Wong, Huai Hui, Heikkilä, Nelli, Laakso, Saila, Miettinen, Simo, Mäyränpää, Mikko I., Mäkitie, Outi, Arstila, T Petteri, and Kekäläinen, Eliisa
- Subjects
- T cells, DATA reduction, CD3 antigen, CANDIDIASIS, LYMPHOCYTES
- Abstract
Mucosal‐associated invariant T cells (MAIT) are innate‐like lymphocytes enriched in mucosal organs where they contribute to antimicrobial defense. APECED is an inborn error of immunity characterized by immune dysregulation and chronic mucocutaneous candidiasis. Reduction in the frequency of circulating MAITs has been reported in many inborn errors of immunity, but only in a few of them has the functional competence of MAITs been assessed. Here, we show in a cohort of 24 patients with APECED that the proportion of circulating MAITs was reduced compared with healthy age‐ and sex‐matched controls (1.1% vs. 2.6% of CD3+ T cells; p < 0.001) and the MAIT cell immunophenotype was more activated. Functionally, the IFN‐γ secretion of patient MAITs after stimulation was comparable to that of healthy controls. We observed in the patients elevated serum IFN‐γ (46.0 vs. 21.1 pg/mL; p = 0.01) and IL‐18 (42.6 vs. 13.7 pg/mL; p < 0.001) concentrations. A lower MAIT proportion did not associate with the levels of neutralizing anti‐IL‐22 or anti‐IL‐12/23 antibodies but had a clear negative correlation with serum concentrations of IFN‐γ, IL‐18, and C‐reactive protein. Our data suggest that the reduction of circulating MAITs in patients with APECED correlates with chronic type 1 inflammation but the remaining MAITs are functionally competent. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Data dimensionality reduction for an optimal switching mode classification applied to a step-down power converter.
- Author
- Fernandez-Serantes, Luis-Alfonso, Casteleiro-Roca, José-Luis, Berger, Hubert, Simić, Dragan, and Calvo-Rolle, José-Luis
- Subjects
- DIMENSIONAL reduction algorithms, PRINCIPAL components analysis, POWER electronics, CONVERTERS (Electronics), DATA reduction
- Abstract
A dimensionality reduction algorithm is applied to an intelligent classification model with the purpose of improving its efficiency and accuracy. The proposed classification model, used to distinguish the operating modes (Hard- and Soft-Switching), is presented, and an analysis of the synchronized rectified step-down converter is performed. With the aim of improving the accuracy and reducing the computational cost of the model, three different methods for dimensionality reduction are applied to the input dataset of the model: self-organizing maps, principal component analysis and correlation matrix. The obtained results show how the number of variables is greatly reduced and the performance of the classification model is boosted: the results show an improvement in the accuracy and efficiency of the classification. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
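The workflow in this entry, reduce first, then classify, follows a standard pattern. A minimal sketch, assuming scikit-learn and synthetic stand-in data; the component count and classifier are placeholders, not the authors' setup:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))               # stand-in for converter measurements
y = (X[:, :3].sum(axis=1) > 0).astype(int)   # stand-in for hard/soft-switching labels

# Reduce to 5 principal components, then classify on the reduced features.
model = make_pipeline(PCA(n_components=5), KNeighborsClassifier())
print(cross_val_score(model, X, y, cv=5).mean())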
35. An Evaluation of Kawasan Sehat Program in Layanan Kesehatan Cuma-Cuma Dompet Dhuafa of West Nusa Tenggara for Stunting Prevention Management.
- Author
-
Samara, Shafira Salsabila, Selawati, Selawati, Sari, Martina Tirta, Amelia, Kurnia, Wisastra, Danan Panggih, and Khotibi, Zulkarnaen
- Subjects
PREGNANT women ,DATA reduction ,STUNTED growth ,BUDGET ,NUTRITION surveys ,TODDLERS - Abstract
Background: Nutritional issues among children remain a significant health concern, with stunting being a national priority. Data from the Indonesian Nutrition Status Survey show an increase in the prevalence of stunting in West Nusa Tenggara (NTB) from 31.4% (2021) to 32.7% (2022). The Dompet Dhuafa Foundation runs a Kawasan Sehat program to support the government in promoting healthy lifestyles, particularly stunting prevention. Objectives: To evaluate the Kawasan Sehat program's efforts to prevent and manage stunting in the Layanan Kesehatan Cuma-Cuma area of NTB. Methods: This evaluative research used a mixed-methods approach. Data were collected through interviews, observations, and document reviews. Informants were selected using purposive sampling. Qualitative data analysis was conducted in stages of data reduction, presentation, and interpretation. Quantitative data were analyzed descriptively to assess stunting prevalence and the achievement of program indicators. Results: The Kawasan Sehat program has contributed to preventing and managing stunting. The input components effectively support its implementation, with adequate human resources and a budget acquired from partnerships. The program was carried out systematically, with planning involving stakeholders, execution combining community empowerment and charity, and supervision. Various interventions are implemented, including support for pregnant women, breastfeeding mothers, infants, and toddlers. The success of the output was evidenced by the decrease in stunting prevalence in Kawasan Sehat NTB and the achievement of the program indicators. Conclusions: The Kawasan Sehat program for stunting intervention has been successful in terms of input, process, and output. Community empowerment should be continuously strengthened to encourage communities to adopt clean and healthy living behaviors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Addressing overfitting in classification models for transport mode choice prediction: a practical application in the Aburrá Valley, Colombia.
- Author
-
Salazar-Serna, Kathleen, Barona, Sergio A., García, Isabel C., Cadavid, Lorena, and Franco, Carlos J.
- Subjects
- *
K-nearest neighbor classification , *DATA distribution , *RANDOM forest algorithms , *MACHINE learning , *DATA reduction , *CHOICE of transportation , *DIMENSION reduction (Statistics) - Abstract
Overfitting poses a significant limitation in mode choice prediction using classification models, often worsened by the proliferation of features from encoding categorical variables. While dimensionality reduction techniques are widely utilized, their effects on travel-mode choice models’ performance have yet to be comparatively studied. This research compares the impact of dimensionality reduction methods (PCA, CATPCA, FAMD, LDA) on the performance of multinomial models and various supervised learning classifiers (XGBoost, Random Forest, Naive Bayes, K-Nearest Neighbors, Multinomial Logit) for predicting travel mode choice. Utilizing survey data from the Aburrá Valley in Colombia, we detail the process of analyzing derived dimensions and selecting optimal models for both overall and class-specific predictions. Results indicate that dimension reduction enhances predictive power, particularly for less common transport modes, providing a strategy to address class imbalance without modifying data distribution. This methodology deepens understanding of travel behavior, offering valuable insights for modelers and policymakers in developing regions with similar characteristics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
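The comparison pattern described here, reduce dimensionality and then score several classifiers on the reduced features, can be sketched as follows. This is a hedged illustration using scikit-learn: the synthetic data, the LDA reducer, and the two classifiers are stand-ins for the paper's survey data and larger model set.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 12))       # stand-in for encoded survey features
y = rng.integers(0, 4, size=300)     # stand-in for four travel modes

for clf in (RandomForestClassifier(random_state=1), GaussianNB()):
    model = make_pipeline(LinearDiscriminantAnalysis(n_components=3), clf)
    score = cross_val_score(model, X, y, cv=5).mean()
    print(type(clf).__name__, round(score, 3))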
37. A New Approach to Reduce the Position Data of Moving Objects Using Neural Network.
- Author
-
Diri, Samet and Yildirim, Mehmet
- Subjects
- *
GLOBAL Positioning System , *LOCATION data , *DATA reduction , *DATA quality , *DATA compression - Abstract
ABSTRACT In this study, an ANN-based Global Navigation Satellite System (GNSS) location data reduction approach is proposed. As GNSS location data become more common, efficient data reduction techniques are needed to reduce transmission, storage, and processing costs. This involves selecting key points from the original trajectory to maintain its integrity, eliminating redundancy, and lowering transmission and storage expenses. We propose a new method for reducing GNSS location data in both online and offline settings, utilizing an ANN trained with a mathematically generated dataset; to our knowledge, ANNs have not previously been used for this data reduction task in the literature. The approach involves training the ANN with a window size of 3 and a threshold value of 10°, then using the trained model for data reduction. Experimental results show that the ANN achieves a reduction rate of around 59.18% relative to the original trajectory. Notably, the ANN yields a significantly lower RMSE than a purely mathematical method, particularly in areas requiring precision. Despite slightly slower computation times, the ANN remains suitable for real-time applications, demonstrating its efficacy for GNSS location data reduction. Our study highlights the method's online capability, reasonable reduction rates, and low RMSE values, distinguishing it from the existing literature. This method shows potential for scenarios where balancing reduction rates with data quality is crucial. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
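The "window size of 3 and threshold of 10°" setup suggests labeling a trajectory point as significant when the heading change through it exceeds the threshold. The sketch below implements only that rule-based labeling idea, the baseline the ANN is trained to emulate, not the network itself; the function names and sample track are assumptions.

import math

def turn_angle(p0, p1, p2):
    # Heading change, in degrees, of the path p0 -> p1 -> p2.
    a1 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
    a2 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    d = abs(a2 - a1)
    return math.degrees(min(d, 2 * math.pi - d))

def reduce_trajectory(points, threshold_deg=10.0):
    kept = [points[0]]
    for i in range(1, len(points) - 1):
        if turn_angle(points[i - 1], points[i], points[i + 1]) > threshold_deg:
            kept.append(points[i])   # significant direction change: keep
    kept.append(points[-1])
    return kept

track = [(0, 0), (1, 0), (2, 0.05), (3, 1.2), (4, 1.3), (5, 1.3)]
print(reduce_trajectory(track))      # endpoints plus the two turn points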
38. Multi‐Objective Evolutionary Algorithm Based on Decomposition With Orthogonal Experimental Design.
- Author
-
He, Maowei, Wang, Zhixue, Chen, Hanning, Cao, Yang, and Ma, Lianbo
- Subjects
- *
OPTIMIZATION algorithms , *ORTHOGONAL decompositions , *EXPERIMENTAL design , *DATA reduction , *PROBLEM solving , *EVOLUTIONARY algorithms - Abstract
ABSTRACT Multi-objective evolutionary algorithms (MOEAs) have become a widely adopted way of solving multi-objective optimisation problems (MOPs). Decomposition-based MOEAs demonstrate promising performance on regular MOPs. However, when handling irregular MOPs, decomposition-based MOEAs cannot offer convincing performance, because a weight vector with no intersection with the Pareto front (PF) may lead to the same optimal solution being assigned to different weight vectors. To solve this problem, this paper proposes an MOEA based on decomposition with orthogonal experimental design (MOEA/D-OED) that involves a selection operation, an orthogonal experimental design (OED) operation, and an adjustment operation. The selection operation identifies unpromising weight vectors from the history of relative reduction values and the degree of convergence. The OED method, based on the relative reduction function, provides explicit guidance for removing worthless weight vectors. The adjustment operation introduces an estimation indicator of both diversity and convergence for adding new weight vectors in the interesting regions. To verify the versatility of the proposed MOEA/D-OED, 26 test problems with various PFs are evaluated in this paper. Empirical results demonstrate that the proposed MOEA/D-OED outperforms eight representative MOEAs on MOPs with various types of PFs, showing promising versatility and highly competitive performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Functional Projection K-means.
- Author
-
Rocci, Roberto and Gattone, Stefano A.
- Subjects
- *
STATISTICAL smoothing , *LEAST squares , *DATA reduction , *FUNCTIONAL analysis , *DATA analysis , *CENTROID - Abstract
Abstract: A new technique for simultaneous clustering and dimensionality reduction of functional data is proposed. The observations are projected into a low-dimensional subspace and clustered by means of a functional K-means. The subspace and the partition are estimated simultaneously by minimizing the within deviance in the reduced space. This allows us to find new dimensions with a very low within deviance, which should correspond to a high level of discriminant power. However, in some cases, the total deviance explained by the new dimensions is so low as to make the subspace, and therefore the partition identified in it, insignificant. To overcome this drawback, we add to the loss a penalty equal to the negative total deviance in the reduced space. In this way, subspaces with a low deviance are avoided. We show how several existing methods are particular cases of our proposal simply by varying the weight of the penalty. The estimation is improved by adding a regularization term to the loss in order to take into account the functional nature of the data by smoothing the centroids. In contrast to existing literature, which largely considers the smoothing as a pre-processing step, in our proposal regularization is integrated with the identification of both subspace and cluster partition. An alternating least squares algorithm is introduced to compute model parameter estimates. The effectiveness of our proposal is demonstrated through its application to both real and simulated data. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
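Read as an optimization problem, the penalized loss this abstract describes can be written schematically as follows. This is an editorial reconstruction in generic notation, not the authors' own: A is the projection onto the subspace, c_k the smoothed centroids, and lambda and mu the penalty and regularization weights.

\min_{A,\,\{c_k\},\,\{\mathcal{C}_k\}}\;
\sum_{k=1}^{K}\sum_{i\in\mathcal{C}_k}\bigl\lVert A^{\top}x_i - c_k\bigr\rVert^{2}
\;-\;\lambda\sum_{i=1}^{n}\bigl\lVert A^{\top}x_i - \overline{A^{\top}x}\bigr\rVert^{2}
\;+\;\mu\sum_{k=1}^{K}\mathcal{R}(c_k)

The first term is the within deviance in the reduced space, the second is the negative total-deviance penalty that keeps the subspace from collapsing onto low-variance directions, and the third smooths the centroids; varying the weights recovers the unpenalized special cases mentioned in the abstract.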
40. Police Performance in Police Record Certificate Services at Tomohon Police Station.
- Author
-
Kairupan, Sisca B., Liwe, Pamela, Mamonto, Fitri H., Kambey, Trifena Julia, and Maliangkay, Denny
- Subjects
- *
JOB performance , *POLICE services , *POLICE reports , *PERFORMANCE management , *DATA reduction - Abstract
The main aim of this research is to analyze and describe the performance of the police in police record certificate (SKCK) services at the Tomohon Police Station. The approach used is descriptive qualitative. The number of informants in this research was 10. Data were collected through observation, interviews, and documentation, and analyzed through data collection, data reduction, data presentation, and confirmation of conclusions. The research results, based on the four research focus indicators, show that: 1) the SKCK service performance of police employees is not optimal, resulting in slow work; 2) obstacles occur, such as long queues when many applicants apply for an SKCK at once, with each applicant taking considerable time, which gives the impression that the service is slow and unsatisfactory, and many applicants still do not understand online SKCK registration even after the steps have been explained; 3) some police employees remain undisciplined, for example arriving late to the office; 4) the public's response to SKCK services is good and positive, though the office still receives constructive complaints and suggestions. It was concluded that the SKCK service performance of police employees was not optimal because problems remain in the service. In performance management, it is important for superiors to measure, manage, and improve the performance of police employees continuously. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Administration of Insil Baru Village, Passi Timur Sub-district, Bolaang Mangondow Regency.
- Author
-
Mamonto, Fitri, Mokoginta, Sita, Palempung, Leidy Wendy, Kaihatu, Jolanda E., and Rifani, Irfan
- Subjects
- *
DATA reduction , *RESEARCH personnel , *NUMBER theory , *ACQUISITION of data , *DATA analysis - Abstract
This research aims to analyze and describe the administration of the Insil Baru village government. The research method used is descriptive qualitative. The number of informants in this study was five. Data were collected using observation, interview, and documentation techniques, and analyzed through data reduction, data presentation, and drawing of conclusions. The results show that not all aspects of village administration have been implemented: of the 27 village administrative records required by PERMENDAGRI No. 47 of 2016, three have not been implemented, namely the development activity book, the development results inventory book, and the activity support cash book. The obstacles stem from a lack of technical guidance from the district government on how village administration should be managed, as well as a lack of the tools needed to manage it. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Enhancing dynamic ensemble selection: combining self-generating prototypes and meta-classifier for data classification.
- Author
-
Manastarla, Alberto and Silva, Leandro A.
- Subjects
- *
K-nearest neighbor classification , *CLASSIFICATION algorithms , *DATA distribution , *DATA reduction , *PROTOTYPES - Abstract
In dynamic ensemble selection (DES) techniques, the competence level of each classifier is estimated from a pool of classifiers, and only the most competent ones are selected to classify a specific test sample and predict its class labels. A significant challenge in DES is efficiently estimating classifier competence for accurate prediction, especially when these techniques employ the K-Nearest Neighbors (KNN) algorithm to define the competence region of a test sample based on a validation set (known as the dynamic selection dataset or DSEL). This challenge is exacerbated when the DSEL does not accurately reflect the original data distribution or contains noisy data. Such conditions can reduce the precision of the system, induce unexpected behaviors, and compromise stability. To address these issues, this paper introduces the self-generating prototype ensemble selection (SGP.DES) framework, which combines meta-learning with prototype selection. The proposed meta-classifier of SGP.DES supports multiple classification algorithms and utilizes meta-features from prototypes derived from the original training set, enhancing the selection of the best classifiers for a test sample. The method improves the efficiency of KNN in defining competence regions by generating a reduced and noise-free DSEL set that preserves the original data distribution. Furthermore, the SGP.DES framework facilitates tailored optimization for specific classification challenges through the use of hyperparameters that control prototype selection and the meta-classifier operation mode to select the most appropriate classification algorithm for dynamic selection. Empirical evaluations of twenty-four classification problems have demonstrated that SGP.DES outperforms state-of-the-art DES methods as well as traditional single-model and ensemble methods in terms of accuracy, confirming its effectiveness across a wide range of classification contexts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
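The KNN-defined competence region at the heart of DES can be sketched quickly with scikit-learn. In this hedged illustration, the pool of depth-limited trees, the DSEL split, and the helper name are assumptions; SGP.DES's prototype generation and meta-classifier are not reproduced.

import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

def local_accuracies(pool, X_dsel, y_dsel, x_test, k=7):
    # The k nearest DSEL samples form the competence region of x_test;
    # each classifier is scored by its accuracy inside that region.
    nn = NearestNeighbors(n_neighbors=k).fit(X_dsel)
    idx = nn.kneighbors(x_test.reshape(1, -1), return_distance=False)[0]
    return [clf.score(X_dsel[idx], y_dsel[idx]) for clf in pool]

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
pool = [DecisionTreeClassifier(max_depth=d, random_state=2).fit(X[:100], y[:100])
        for d in (1, 3, 5)]
print(local_accuracies(pool, X[100:], y[100:], X[0]))  # select the best scorer(s)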
43. A tied-weight autoencoder for the linear dimensionality reduction of sample data.
- Author
-
Kim, Sunhee, Chu, Sang-Ho, Park, Yong-Jin, and Lee, Chang-Yong
- Subjects
- *
DATA reduction , *DATA science , *MACHINE learning , *BEST practices , *CLASSIFICATION - Abstract
Dimensionality reduction is a family of methods used in machine learning and data science to reduce the number of dimensions in a dataset. While linear methods are generally less effective at dimensionality reduction than nonlinear methods, they provide a linear relationship between the original data and the dimensionality-reduced representation, leading to better interpretability. In this research, we present a tied-weight autoencoder as a dimensionality reduction model with the merits of both linear and nonlinear methods. Although the tied-weight autoencoder is a nonlinear dimensionality reduction model, we approximate it so that it functions as a linear model. This is achieved by removing the hidden-layer units that are largely inactivated by the input data, while preserving the model's effectiveness. We evaluate the proposed model by comparing its performance with other linear and nonlinear models on benchmark datasets. Our results show that the proposed model performs comparably to a nonlinear autoencoder of similar structure. More importantly, we show that the proposed model outperforms the linear models on various metrics, including mean square error, data reconstruction, and classification of low-dimensional projections of the input data. Our study thus provides general recommendations for best practices in dimensionality reduction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
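A tied-weight autoencoder shares one weight matrix between encoder and decoder, the decoder using its transpose. Below is a minimal NumPy sketch with linear activations, which stays close to the abstract's "approximately linear" reading; the unit-pruning step is not reproduced, and all sizes, data, and the learning rate are placeholders.

import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 10))
X -= X.mean(axis=0)

d_in, d_hid, lr = 10, 3, 0.01
W = rng.normal(scale=0.1, size=(d_in, d_hid))   # single shared weight matrix
b_h, b_o = np.zeros(d_hid), np.zeros(d_in)

for epoch in range(200):
    H = X @ W + b_h             # encode
    X_hat = H @ W.T + b_o       # decode with the tied (transposed) weights
    err = X_hat - X
    g = 2.0 / len(X)            # scale for the mean-squared-error gradient
    grad_W = g * (X.T @ err @ W + err.T @ H)   # encoder + decoder contributions
    W -= lr * grad_W
    b_h -= lr * g * (err @ W).sum(axis=0)
    b_o -= lr * g * err.sum(axis=0)

print("reconstruction MSE:", float((err ** 2).mean()))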
44. AI‐Based Segmentation for Corrosion Risk Prediction in Carline Designs.
- Author
-
Gollé‐Leidreiter, Marcel, Kapfer, Konstantin, Mittelbach, Andreas, and Kowalski, Julia
- Subjects
- *
DATA reduction , *ADHESIVES , *TRIANGLES , *FLANGES , *SIMULATION methods & models - Abstract
ABSTRACT With shortening development cycles, digital development rises in importance. We therefore present an approach to segmenting digital carlines by corrosion risk using digital data sets. A typical carline is described via STL files that accumulate to approximately 25 million triangles. While physics-based simulation models exist that can predict the onset of corrosion in local geometry settings, applying them to such a complex surface mesh would be prohibitively expensive to compute. This calls for data reduction techniques that reduce the number of triangles by identifying areas prone to corrosion, as well as areas protected by measures such as adhesives. The segmentation of corrosion-prone areas of a body-in-white part, as presented in Waibel et al., is therefore extended with a design-measure segmentation for adhesive pipes. This work introduces a method to predict corrosion-protected areas on flanges with adhesives by implementing feature extraction and a transfer-learning-based workflow for data-efficient training. The result is a validated, high-precision prediction on body-in-white parts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Counting regulations and measuring regulatory impact: a call for nuance.
- Author
-
Shapiro, Stuart
- Subjects
LANGUAGE models ,DATA reduction ,ECONOMIC impact ,COUNTING - Abstract
The effect of regulation on virtually every aspect of the lives of US citizens has led to an understandable impulse to measure this total impact. It has led to various attempts to count the total number of regulations and regulatory requirements, and to total the costs and benefits of regulation. And these counting mechanisms have played prominent roles in discussions over statutory changes designed to reform the process by which we write regulations. However, counting regulations in a meaningful way and measuring their cumulative economic impact is an astonishingly difficult task. For this reason, there have been a wide variety of methods that scholars and advocates have employed in the effort to do so. This article is an attempt to catalog the most prominent methods of counting regulations and measuring regulatory impact in the United States, describe their strengths and weaknesses, and suggest alternative approaches to attack this important question. We suggest both using large language models and detailed analysis of Paperwork Reduction Act data and, at the opposite extreme, doing more qualitative work on the consequences of regulation on individuals, firms, and industries. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Initialization improvement and clustering quality evaluation for the K-means algorithm.
- Author
-
何选森, 何帆, and 于海澜
- Subjects
- *
K-means clustering , *PRINCIPAL components analysis , *CENTROID , *DATA reduction , *PROBLEM solving - Abstract
To solve the problem of random initialization in the K-means algorithm, an improved scheme is proposed. The features of the data are standardized and principal component analysis (PCA) is used to achieve dimensionality reduction. The initial centroids of the algorithm are determined by the farthest centroid and the min-max distance rule. To obtain the inherent number of clusters in the data, empirical rules and the elbow method are used, and silhouette analysis is used to evaluate clustering quality. The simulation results show that the average A test statistic of the other algorithms is 2.72 times that of this scheme, and the clustering error of the improved scheme is reduced by 6.04%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
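The initialization described here, standardize, reduce with PCA, then seed centroids by a farthest-point (min-max) rule, can be sketched as follows with scikit-learn and NumPy. The seeding of the first centroid and the synthetic data are assumptions, not the paper's precise scheme.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

def minmax_init(X, k):
    # First centroid: the point farthest from the data mean.
    centroids = [X[np.argmax(np.linalg.norm(X - X.mean(axis=0), axis=1))]]
    while len(centroids) < k:
        # Next centroid: the point whose nearest chosen centroid is farthest.
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[np.argmax(d)])
    return np.array(centroids)

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 6))
Z = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(minmax_init(Z, 3))   # initial centroids for K-means on the reduced data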
47. Accelerating aerodynamic simulations with a hybrid fine-tuned deep learning model.
- Author
-
Li, Jiahui, Zhang, Xiaoya, Peng, Wei, Liu, Xu, Wang, Wenhui, and Yao, Wen
- Subjects
- *
COMPUTATIONAL fluid dynamics , *DEEP learning , *DATA reduction , *HYBRID computer simulation , *AEROFOILS - Abstract
High-fidelity computational fluid dynamics simulations play an essential role in predicting complex aerodynamic flow fields, but their use is hindered by the high computational burden of fine spatial discretizations. While recent data-driven methods offer a promising avenue for performance improvements, they often face challenges related to excessive reliance on labeled data and insufficient accuracy. Consequently, we propose a novel hybrid model, which integrates a deep learning model into the fluid simulation workflow, harnessing its predictive capabilities to accelerate the fluid simulations. The acceleration is performed by a coarse-to-fine flow field mapping. To mitigate over-reliance on labeled data, the model is first pre-trained using pseudo-labeled data and then fine-tuned with a newly designed attention mechanism. The acceleration efficiency of the hybrid model is demonstrated through two cases: aerodynamic simulations of an airfoil and of a spherical blunt cone under varied operating conditions. Numerical experiments reveal that the proposed model achieves a substantial reduction in the labeled data required as well as improved prediction accuracy, in comparison with traditional data-driven methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
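The coarse-to-fine mapping can be pictured as a small network that upsamples a coarse flow field to a fine grid, pre-trained on pseudo-labels and later fine-tuned on a few high-fidelity fields. The PyTorch sketch below is speculative: the architecture, sizes, and random stand-in data are assumptions, and the paper's attention-based fine-tuning is not reproduced.

import torch
import torch.nn as nn

class CoarseToFine(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, coarse):
        return self.net(coarse)

model = CoarseToFine()
coarse = torch.randn(4, 1, 32, 32)    # batch of coarse flow fields
pseudo = torch.randn(4, 1, 64, 64)    # pseudo-labels for pre-training
loss = nn.functional.mse_loss(model(coarse), pseudo)
loss.backward()   # one pre-training step; fine-tuning would reuse the weights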
48. 2D BAO vs. 3D BAO: Solving the Hubble Tension with Bimetric Cosmology.
- Author
-
Dwivedi, Sowmaydeep and Högås, Marcus
- Subjects
- *
COSMIC background radiation , *TYPE I supernovae , *COSMOLOGICAL constant , *DATA reduction , *IMAGINARY histories , *HUBBLE constant - Abstract
Ordinary 3D Baryon Acoustic Oscillation (BAO) data are model-dependent, requiring the assumption of a cosmological model to calculate comoving distances during data reduction. Throughout the present-day literature, the assumed model is ΛCDM. However, it has been pointed out in several recent works that this assumption can be inadequate when analyzing alternative cosmologies, potentially biasing the Hubble constant (H0) low, thus contributing to the Hubble tension. To address this issue, 3D BAO data can be replaced with 2D BAO data, which are only weakly model-dependent. The impact of using 2D BAO data, in combination with alternative cosmological models beyond ΛCDM, has been explored for several phenomenological models, showing a promising reduction in the Hubble tension. In this work, we accommodate these models in the theoretically robust framework of bimetric gravity. This is a modified theory of gravity that exhibits a transition from a (possibly) negative cosmological constant in the early universe to a positive one in the late universe. By combining 2D BAO data with cosmic microwave background and type Ia supernovae data, we find that the inverse distance ladder in this theory yields a Hubble constant of H0 = (71.0 ± 0.9) km/s/Mpc, consistent with the SH0ES local distance ladder measurement of H0 = (73.0 ± 1.0) km/s/Mpc. Replacing 2D BAO with 3D BAO results in H0 = (68.6 ± 0.5) km/s/Mpc from the inverse distance ladder. We conclude that the choice of BAO data significantly impacts the Hubble tension, with ordinary 3D BAO data exacerbating the tension, while 2D BAO data give results consistent with the local distance ladder. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
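The model dependence at issue enters when redshifts are converted to comoving distances during data reduction. For a flat ΛCDM background, the standard relations are (textbook formulas added for context, not taken from the paper):

D_C(z) \;=\; c \int_0^{z} \frac{\mathrm{d}z'}{H(z')},
\qquad
H(z) \;=\; H_0 \sqrt{\Omega_m (1+z)^3 + \Omega_\Lambda}.

Any 3D BAO distance inferred this way inherits the assumed H(z), whereas the 2D BAO angular scale is read off the angular correlation function without converting redshifts to distances, which is why it is only weakly model-dependent.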
49. nsDCC: dual-level contrastive clustering with nonuniform sampling for scRNA-seq data analysis.
- Author
-
Wang, Linjie, Li, Wei, Zhou, Fanghui, Yu, Kun, Feng, Chaolu, and Zhao, Dazhe
- Subjects
- *
IRREGULAR sampling (Signal processing) , *GENE expression , *RNA sequencing , *DATA reduction , *DATA analysis - Abstract
Dimensionality reduction and clustering are crucial tasks in single-cell RNA sequencing (scRNA-seq) data analysis; treating them independently, as current pipelines do, prevents them from benefiting each other. The latest methods jointly optimize these tasks through deep clustering. Contrastive learning, with its powerful representation capability, can bridge a gap that common deep clustering methods face: the need for pre-defined cluster centers. Therefore, a dual-level contrastive clustering method with nonuniform sampling (nsDCC) is proposed for scRNA-seq data analysis. Dual-level contrastive clustering, which combines instance-level and cluster-level contrast, jointly optimizes dimensionality reduction and clustering. Multi-positive contrastive learning and a unit-matrix constraint are introduced at the instance and cluster levels, respectively. Furthermore, an attention mechanism is introduced to capture inter-cellular information, which benefits clustering. The nsDCC focuses on important samples at category boundaries and in minority categories through the proposed nearest-boundary sparsest-density weight assignment algorithm, making it capable of capturing comprehensive characteristics of imbalanced datasets. Experimental results show that nsDCC outperforms six other state-of-the-art methods on both real and simulated scRNA-seq data, validating its performance on dimensionality reduction and clustering of scRNA-seq data, especially for imbalanced data. Simulation experiments demonstrate that nsDCC is insensitive to "dropout events" in scRNA-seq. Finally, cluster differential expressed gene analysis confirms the meaningfulness of the results from nsDCC. In summary, nsDCC is a new way of analyzing and understanding scRNA-seq data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
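An instance-level contrastive term of the kind dual-level methods build on is the standard NT-Xent loss: two views of the same cell are pulled together and all other cells pushed apart. The PyTorch sketch below shows only this generic building block; nsDCC's multi-positive extension, cluster-level term, and nonuniform sampling are not reproduced.

import torch
import torch.nn.functional as F

def ntxent(z1, z2, tau=0.5):
    # z1, z2: embeddings of two augmented views of the same N cells.
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # 2N normalized embeddings
    sim = z @ z.T / tau                           # cosine similarity / temperature
    sim.fill_diagonal_(float("-inf"))             # exclude self-pairs
    n = z1.shape[0]
    # The positive for row i is its counterpart in the other view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 16), torch.randn(8, 16)
print(ntxent(z1, z2))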
50. A novel in situ methodology for U–Pb, (U–Th)/He, and fission track triple dating.
- Author
-
Hu, Jie, Li, Zhiwu, Li, Jinxi, Liu, Shugen, Xu, Ganqing, Yang, Chaoqun, Tong, Kui, and Li, Yin
- Subjects
- *
FISSION track dating , *SINGLE crystals , *SPATIAL resolution , *DATA reduction , *LASER ablation inductively coupled plasma mass spectrometry - Abstract
In this study, we present a novel laser-based technique for in situ U–Pb, (U–Th)/He, and fission track (FT) triple dating of apatite and other U-rich accessory minerals. This approach allows three ages with different closure temperatures to be obtained on a single crystal, significantly enhancing spatial resolution and analytical productivity. Our new workflow employs an Autoscan System for FT analysis, a ResoChron system for He determination, and an LA-ICP-MS system for parent isotope and U–Pb age measurements. We describe the sample preparation, instrument parameters, and data reduction for apatite in detail. Using a sample–standard bracketing approach to determine the pairwise factor κ, we significantly improved reproducibility, particularly by replacing volume with depth in helium pit measurement. Additionally, we found that the chemical etching used in FT dating can reduce the (U–Th)/He age by removing the surrounding He, necessitating a second polishing for samples with high track density. This in situ triple dating method has been successfully applied to five large apatite crystals, improving analytical efficiency and resolving grain-to-grain age discrepancies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF