6,920 results for "sparsity"
Search Results
2. Enhancing Recommender Systems through Imputation and Social-Aware Graph Convolutional Neural Network
- Author
-
Faroughi, Azadeh, Moradi, Parham, and Jalili, Mahdi
- Published
- 2025
- Full Text
- View/download PDF
3. High-dimensional copula-based Wasserstein dependence
- Author
-
De Keyser, Steven and Gijbels, Irène
- Published
- 2025
- Full Text
- View/download PDF
4. Half-closed discontinuous Galerkin discretisations
- Author
-
Pan, Y. and Persson, P.-O.
- Published
- 2025
- Full Text
- View/download PDF
5. Damage identification in plate-like structures using frequency-coupled [formula omitted]-based sparse estimation
- Author
-
Dwek, Nathan, Dimopoulos, Vasileios, Janssens, Dennis, Kirchner, Matteo, Deckers, Elke, and Naets, Frank
- Published
- 2025
- Full Text
- View/download PDF
6. Generalized sparse and outlier-robust broad learning systems for multi-dimensional output problems
- Author
-
Zhang, Yuao, Dai, Yunwei, Ke, Shuya, Wu, Qingbiao, and Li, Jing
- Published
- 2024
- Full Text
- View/download PDF
7. True sparse PCA for reducing the number of essential sensors in virtual metrology.
- Author
-
Xie, Yifan, Wang, Tianhui, Jeong, Young-Seon, Tosyali, Ali, and Jeong, Myong K.
- Subjects
PRINCIPAL components analysis ,DETECTORS ,METROLOGY ,SEMICONDUCTOR industry - Abstract
In the semiconductor industry, virtual metrology (VM) is a cost-effective and efficient technique for monitoring processes from one wafer to another. This technique is implemented by generating a predictive model that uses real-time data from equipment sensors in conjunction with measured wafer quality characteristics. Before establishing a prediction model for the VM system, appropriate selection of relevant input variables should be performed to maintain the efficiency of subsequent analyses, considering the large dimensionality of the sensor data inputs. However, wafer production processes usually employ multiple sensors, which leads to cost escalations. Herein, we propose a variant of sparse principal component analysis (PCA) called true sparse PCA (TSPCA). The proposed method uses a small number of input variables in the first few principal components. The main contribution of the proposed TSPCA is reducing the number of essential sensors. Our experimental results demonstrate that, compared to existing sparse PCA methods, the proposed approach can reduce the number of sensors required while explaining an approximately equivalent amount of variance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
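The sensor-reduction idea in entry 7 can be illustrated with a generic sparse-PCA heuristic, truncated power iteration. This is not the paper's TSPCA algorithm; it is a minimal sketch of how forcing most loadings to zero singles out the few informative sensors:

```python
import numpy as np

def sparse_pc1(X, k, n_iter=200):
    """First sparse principal component via truncated power iteration:
    at each step keep only the k largest-magnitude loadings. A generic
    sparse-PCA heuristic, not the TSPCA method from the paper."""
    C = np.cov(X, rowvar=False)
    v = np.ones(C.shape[0]) / np.sqrt(C.shape[0])
    for _ in range(n_iter):
        v = C @ v
        v[np.argsort(np.abs(v))[:-k]] = 0.0   # truncate to k loadings
        v /= np.linalg.norm(v)
    return v

rng = np.random.default_rng(0)
signal = rng.normal(size=(500, 1))
# sensors 0 and 1 carry the signal; sensors 2-9 are low-variance noise
X = np.hstack([signal, signal, 0.1 * rng.normal(size=(500, 8))])
v = sparse_pc1(X, k=2)
# only the two informative sensors receive nonzero loadings
```

With the loadings confined to two entries, the remaining eight sensors could be dropped while the component still explains nearly all of the variance.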
8. L -Penalized Membership in Sparse Fuzzy Clustering
- Author
-
Ferraro, Maria Brigida, Forti, Marco, Giordani, Paolo, Pollice, Alessio, editor, and Mariani, Paolo, editor
- Published
- 2025
- Full Text
- View/download PDF
9. Regularisation
- Author
-
Reilly, James and Reilly, James
- Published
- 2025
- Full Text
- View/download PDF
10. A Recursive Learning Algorithm for the Least Squares SVM
- Author
-
Xia, Xiao-Lei, Ouyang, Mingxing, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Hadfi, Rafik, editor, Anthony, Patricia, editor, Sharma, Alok, editor, Ito, Takayuki, editor, and Bai, Quan, editor
- Published
- 2025
- Full Text
- View/download PDF
11. Chapter Six - Compressive sensing technique for 3D medical image compression
- Author
-
Upadhyaya, Vivek and Gupta, Nand Kishor
- Published
- 2025
- Full Text
- View/download PDF
12. Regulation of dentate gyrus pattern separation by hilus ectopic granule cells.
- Author
-
Yin, Haibin, Sun, Xiaojuan, Yang, Kai, Lan, Yueheng, and Lu, Zeying
- Abstract
The dentate gyrus (DG) in the hippocampus is reported to perform pattern separation, converting similar inputs into different outputs and thus avoiding memory interference. Previous studies have found that humans and mice with epilepsy show significant pattern separation deficits, and that a portion of adult-born granule cells (abGCs) migrate abnormally into the hilus, forming hilus ectopic granule cells (HEGCs). In the absence of relevant pathophysiological experiments, how HEGCs affect pattern separation remains unclear. In this paper, we therefore construct a model of the DG neuronal circuit and numerically investigate the effects of HEGCs on pattern separation. The results showed that HEGCs impaired pattern separation efficiency because the sparse firing of granule cells (GCs) was disrupted. We provide new insights into the underlying mechanisms by analyzing two excitatory circuits involving HEGCs within the DG: GC-HEGC-GC and GC-Mossy cell (MC)-GC. The recurrent excitatory circuit GC-HEGC-GC, formed by HEGC mossy fiber sprouting, significantly enhanced GC activity and consequently disrupted pattern separation. The other excitatory circuit had negligible effects on pattern separation, because the direct and indirect influences of MCs on GCs preserved the sparse firing of GCs. Thus, HEGCs impair DG pattern separation mainly through the GC-HEGC-GC circuit, and ablating HEGCs may therefore be an effective way to improve pattern separation in patients with epilepsy. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
13. Shearlet-based Regularization with an Application to Limited Data X-ray Tomography.
- Author
-
Purisha, Zenith, Solekhudin, Imam, and Sumardi
- Subjects
- *
COMPUTED tomography , *CONSTRAINED optimization , *DIAGNOSTIC imaging , *X-rays , *TOMOGRAPHY , *IMAGE denoising - Abstract
Numerous constrained optimization methods have been suggested to reduce the X-ray dose in computerized tomography. These approaches focus on minimizing a regularizing function, which gauges the deviation from prior knowledge about the imaged object, under the condition of maintaining a predetermined level of consistency with the detected X-ray attenuation. Total variation (TV) is a frequently explored regularizing function. TV minimization techniques exhibit excellent denoising performance for simple images, yet they lead to the loss of texture information when employed on more complex images. Given that medical imaging frequently involves textured images, utilizing TV may not be advantageous in such scenarios. Alternative studies propose incorporating multi-scale geometric transforms into the regularization function; one recent preference in this regard is the shearlet transform. This work presents a proof-of-concept that showcases the application of the discrete shearlet transform as a sparsifying transform in a computed tomography reconstruction solver. Specifically, the regularization term is the l1-norm of the shearlet coefficients. The algorithm's iterative computation incorporates an operation on the shearlet coefficients: the soft-thresholding operator, with its parameter chosen adaptively. To improve relevance for biomedical imaging, we propose setting the thresholding parameter from a desired sparsity level obtained from a biological object. The effectiveness of the proposed method is assessed using two types of data: a chest dataset generated in MATLAB and real data collected from X-ray tomographic measurements of an axial slice of a ladybug. [ABSTRACT FROM AUTHOR]
- Published
- 2025
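The soft-thresholding operator used in entry 13 is a one-liner; a minimal sketch follows (the paper's adaptive choice of the threshold parameter is not reproduced here):

```python
import numpy as np

def soft_threshold(c, t):
    """S_t(c) = sign(c) * max(|c| - t, 0): the proximal operator of the
    l1-norm, applied elementwise to transform coefficients."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

coeffs = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
shrunk = soft_threshold(coeffs, 1.0)
# coefficients inside [-1, 1] are zeroed; the rest shrink toward zero
```

Applied to shearlet coefficients inside an iterative solver, this step is what enforces sparsity of the reconstruction in the transform domain.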
14. Generalized Fused Lasso for Treatment Pooling in Network Meta‐Analysis.
- Author
-
Kong, Xiangshan, Daly, Caitlin H., and Béliveau, Audrey
- Subjects
- *
MATRIX decomposition , *VECTOR data , *MULTIPLE comparisons (Statistics) , *LEAST squares , *PARSIMONIOUS models - Abstract
This work develops a generalized fused lasso (GFL) approach to fitting contrast‐based network meta‐analysis (NMA) models. The GFL method penalizes all pairwise differences between treatment effects, resulting in the pooling of treatments that are not sufficiently different. This approach offers an intriguing avenue for potentially mitigating biases in treatment rankings and reducing sparsity in networks. To fit contrast‐based NMA models within the GFL framework, we formulate the models as generalized least squares problems, where the precision matrix depends on the standard error in the data, the estimated between‐study heterogeneity and the correlation between contrasts in multi‐arm studies. By utilizing a Cholesky decomposition of the precision matrix, we linearly transform the data vector and design matrix to frame NMA within the GFL framework. We demonstrate how to construct the GFL penalty such that every pairwise difference is penalized similarly. The model is straightforward to implement in R via the "genlasso" package, and runs instantaneously, contrary to other regularization approaches that are Bayesian. A two‐step GFL‐NMA approach is recommended to obtain measures of uncertainty associated with the (pooled) relative treatment effects. Two simulation studies confirm the GFL approach's ability to pool treatments that have the same (or similar) effects while also revealing when incorrect pooling may occur, and its potential benefits against alternative methods. The novel GFL‐NMA method is successfully applied to a real‐world dataset on diabetes where the standard NMA model was not favored compared to the best‐fitting GFL‐NMA model with AICc selection of the tuning parameter (ΔAICc > 13). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Multiscale local polynomial density estimation.
- Author
-
Jansen, Maarten
- Subjects
- *
POLYNOMIALS , *DENSITY - Abstract
The multiscale local polynomial transform (MLPT) is a combination of a kernel method for nonparametric regression or density estimation with a projection onto a basis in a multiscale framework. The MLPT is proposed for the estimation of densities with possibly one or more singular points at unknown locations. The proposed estimator reformulates the density estimation problem as a high-dimensional, sparse regression problem with asymptotically exponential response variables. The covariates in this model are the observations from the unknown density themselves. The design matrix comes from a novel extension of the MLPT for use on highly non-equidistant data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Concentration of a Sparse Bayesian Model With Horseshoe Prior in Estimating High‐Dimensional Precision Matrix.
- Author
-
Mai, The Tien
- Abstract
Precision matrices are crucial in many fields such as social networks, neuroscience and economics, representing the edge structure of Gaussian graphical models (GGMs), where a zero in an off‐diagonal position of the precision matrix indicates conditional independence between nodes. In high‐dimensional settings where the dimension of the precision matrix p exceeds the sample size n and the matrix is sparse, methods like graphical Lasso, graphical SCAD and CLIME are popular for estimating GGMs. While frequentist methods are well‐studied, Bayesian approaches for (unstructured) sparse precision matrices are less explored. The graphical horseshoe estimate, applying the global‐local horseshoe prior, shows superior empirical performance, but theoretical work for sparse precision matrix estimations using shrinkage priors is limited. This paper addresses these gaps by providing concentration results for the tempered posterior with the fully specified horseshoe prior in high‐dimensional settings. Moreover, we also provide novel theoretical results for model misspecification, offering a general oracle inequality for the posterior. A concise set of simulations is performed to validate our theoretical findings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. ReLU, Sparseness, and the Encoding of Optic Flow in Neural Networks.
- Author
-
Layton, Oliver W., Peng, Siyuan, and Steinmetz, Scott T.
- Abstract
Accurate self-motion estimation is critical for various navigational tasks in mobile robotics. Optic flow provides a means to estimate self-motion using a camera sensor and is particularly valuable in GPS- and radio-denied environments. The present study investigates the influence of different activation functions—ReLU, leaky ReLU, GELU, and Mish—on the accuracy, robustness, and encoding properties of convolutional neural networks (CNNs) and multi-layer perceptrons (MLPs) trained to estimate self-motion from optic flow. Our results demonstrate that networks with ReLU and leaky ReLU activation functions not only achieved superior accuracy in self-motion estimation from novel optic flow patterns but also exhibited greater robustness under challenging conditions. The advantages offered by ReLU and leaky ReLU may stem from their ability to induce sparser representations than GELU and Mish do. Our work characterizes the encoding of optic flow in neural networks and highlights how the sparseness induced by ReLU may enhance robust and accurate self-motion estimation from optic flow. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Sparse Independent Component Analysis with an Application to Cortical Surface fMRI Data in Autism.
- Author
-
Wang, Zihang, Gaynanova, Irina, Aravkin, Aleksandr, and Risk, Benjamin B.
- Subjects
- *
SCHOOL children , *NONSMOOTH optimization , *AUTISM in children , *FUNCTIONAL connectivity , *SMOOTHNESS of functions , *AUTISTIC children - Abstract
Independent component analysis (ICA) is widely used to estimate spatial resting-state networks and their time courses in neuroimaging studies. It is thought that independent components correspond to sparse patterns of co-activating brain locations. Previous approaches for introducing sparsity to ICA replace the non-smooth objective function with smooth approximations, resulting in components that do not achieve exact zeros. We propose a novel Sparse ICA method that enables sparse estimation of independent source components by solving a non-smooth non-convex optimization problem via the relax-and-split framework. The proposed Sparse ICA method balances statistical independence and sparsity simultaneously and is computationally fast. In simulations, we demonstrate improved estimation accuracy of both source signals and signal time courses compared to existing approaches. We apply our Sparse ICA to cortical surface resting-state fMRI in school-aged autistic children. Our analysis reveals differences in brain activity between certain regions in autistic children compared to children without autism. Sparse ICA selects coactivating locations, which we argue is more interpretable than dense components from popular approaches. Sparse ICA is fast and easy to apply to big data. Supplementary materials for this article are available online, including a standardized description of the materials available for reproducing the work. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Tracking Analysis of the ℓ0-LMS Algorithm.
- Author
-
da Silva, Lucas Paiva R., de Barros, Ana L. Ferreira, Pinto, Milena Faria, Oliveira, Fernanda D. V. R., and Haddad, Diego B.
- Subjects
- *
COEFFICIENTS (Statistics) , *ADAPTIVE filters , *IMPULSE response , *STOCHASTIC models , *ALGORITHMS - Abstract
One of the main challenges in using adaptive filtering algorithms is efficiently emulating a system subject to noisy disturbances. This can be facilitated in applications where the system's impulse response is sparse, which allows for acceleration of the convergence rate if appropriate strategies are used. As a result, methods that impose norm constraints on the estimates are widely used. However, in the case of non-stationary plants to be identified, there is a gap in terms of theoretical performance guarantees of these algorithms. This paper proposes a novel stochastic model capable of predicting the performance of the ℓ0-LMS algorithm in identifying a plant subjected to a first-order Markovian disturbance. A tracking analysis is carried out, covering both the average performance of the adaptive coefficients and their second-order statistics. The theoretical model offers an analytical equation that predicts the asymptotic mean squared deviation in terms of the variance of the Markovian disturbance. Further, for most simulated scenarios, the theoretical model's error in mean squared deviation remains below 1 dB, even when the learning step varies across several orders of magnitude. The theoretical model can accurately predict the steady-state regime for a wide range of learning step values and calculate an optimal value for this parameter. The findings are confirmed through extensive simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
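The zero-attraction idea behind the ℓ0-LMS algorithm of entry 19 can be sketched as a standard LMS update plus a term that nudges small coefficients toward zero. The parameter values below are illustrative choices, not taken from the paper:

```python
import numpy as np

def l0_lms(x, d, n_taps, mu=0.01, kappa=1e-4, beta=10.0):
    """LMS identification of a sparse impulse response with an added
    zero-attraction term (illustrative parameters, not the paper's)."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # tap-delay line, newest first
        e = d[n] - w @ u                    # a priori estimation error
        # zero attractor, active only for coefficients with |w_i| <= 1/beta
        g = np.where(np.abs(w) <= 1.0 / beta,
                     beta * np.sign(w) * (1.0 - beta * np.abs(w)), 0.0)
        w += mu * e * u - kappa * g
    return w

rng = np.random.default_rng(1)
h = np.zeros(16)
h[[2, 9]] = [1.0, -0.5]                     # sparse plant to identify
x = rng.normal(size=5000)
d = np.convolve(x, h)[:len(x)]              # noiseless plant output
w = l0_lms(x, d, n_taps=16)
# w converges to h; the attractor keeps the zero taps pinned near zero
```

With kappa = 0 this reduces to plain LMS; the attractor term is what accelerates convergence on sparse plants.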
20. LightCardiacNet: light-weight deep ensemble network with attention mechanism for cardiac sound classification.
- Author
-
K. V., Suma, Koppad, Deepali B., Raghavan, Dharini, and P. R., Manjunath
- Subjects
LONG short-term memory ,ENSEMBLE learning ,DATA augmentation ,MEDICAL personnel ,HEART sounds - Abstract
Cardiovascular diseases (CVDs) account for about 32% of global deaths. While digital stethoscopes can record heart sounds, expert analysis is often lacking. To address this, we propose LightCardiacNet, an interpretable, lightweight ensemble neural network using Bi-Directional Gated Recurrent Units (Bi-GRU). It is trained on the PASCAL Heart Challenge and CirCor DigiScope datasets. Static network pruning enhances model sparsity for real-time deployment. We employ various data augmentation techniques to improve resilience to background noise. An ensemble of the two networks is constructed by employing a weighted average approach that combines the two light-weight attention Bi-GRU networks trained on different datasets, which outperforms several state-of-the-art networks achieving an accuracy of 99.8%, specificity of 99.6%, sensitivity of 95.2%, ROC-AUC of 0.974 and inference time of 17 ms on the PASCAL dataset, accuracy of 98.5%, specificity of 95.1%, sensitivity of 90.9%, ROC-AUC of 0.961 and inference time of 18 ms on the CirCor dataset, and an accuracy of 96.21%, sensitivity of 92.78%, specificity of 93.16%, ROC-AUC of 0.913 and inference time of 17.5 ms on real-world data. We adopt the SHAP algorithm to incorporate model interpretability and provide insights to make it clinically explainable and useful to healthcare professionals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Lazy learning and sparsity handling in recommendation systems.
- Author
-
Mishra, Suryanshi, Singh, Tinku, Kumar, Manish, and Satakshi
- Subjects
MACHINE learning ,MATRIX decomposition ,RECOMMENDER systems ,LAZINESS ,MATRICES (Mathematics) ,ALGORITHMS - Abstract
Recommendation systems are ubiquitous in various domains, facilitating users in finding relevant items according to their preferences. Identifying pertinent items that meet their preferences enables users to target the right items. To predict ratings more accurately, recommender systems often apply collaborative filtering (CF) approaches to sparse user-rated item matrices. Due to a lack of knowledge regarding newly formed entities, the data sparsity of the user-rated item matrix has an enormous effect on collaborative filtering algorithms, which frequently face lazy learning issues. Real-world datasets with exponentially increasing users and reviews make this situation worse. Matrix factorization (MF) stands out as a key strategy in recommender systems, especially for CF tasks. This paper presents a neural network matrix factorization (NNMF) model through machine learning to overcome data sparsity challenges. This approach aims to enhance recommendation quality while mitigating the impact of data sparsity, a common issue in CF algorithms. A thorough comparative analysis was conducted on the well-known MovieLens dataset, spanning from 1.6 to 9.6 M records. The outcomes consistently favored the NNMF algorithm, showcasing superior performance compared to the state-of-the-art method in this domain in terms of precision, recall, F1 score, MAE, and RMSE. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
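The matrix-factorization strategy that entry 21 builds on can be sketched with a plain SGD factorization of a sparse rating matrix. This is a generic MF baseline, not the paper's NNMF network:

```python
import numpy as np

def mf_complete(R, mask, k=2, lr=0.01, reg=0.02, epochs=2000, seed=0):
    """Factor a partially observed rating matrix as R ~ P @ Q.T by SGD
    over the observed entries only; the full product then predicts the
    missing ratings. A generic MF baseline, not the paper's NNMF model."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = 0.1 * rng.normal(size=(n_users, k))   # latent user factors
    Q = 0.1 * rng.normal(size=(n_items, k))   # latent item factors
    users, items = np.nonzero(mask)
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P @ Q.T

R = np.array([[5.0, 4.0, 0.0, 1.0],
              [4.0, 0.0, 1.0, 1.0],
              [1.0, 1.0, 5.0, 4.0]])
mask = R > 0                                  # zeros mark missing ratings
pred = mf_complete(R, mask)
# observed entries are reproduced closely; blanks get predicted values
```

Because only observed entries enter the loss, the factorization sidesteps the sparsity of the matrix; a neural variant replaces the inner product P[u] @ Q[i] with a learned network.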
22. Carbon Emission Optimization for Assembled Buildings Using Interval Grey GERT Modelling and Modify NSGA-III Algorithm in China.
- Author
-
Song, Hong, Chen, Xiaoxiao, and Wang, Heping
- Abstract
Developing prefabricated buildings is an effective means to reduce carbon emissions in the construction industry. Currently, research on project management of prefabricated buildings mainly focuses on multi-objective optimization of construction period, cost, and quality, with little consideration given to the important environmental factor of carbon emissions. In this article, we propose a comprehensive optimization objective involving the carbon emissions, duration, cost and quality level of projects. Then, an interval grey GERT network is used to establish a multi-objective joint optimization model for the green construction of assembled buildings, and the modelling problem is solved with the modified NSGA-III algorithm based on a local search approach with sparsity. Taking the affordable housing project on the north side of Shangfang in Nanjing as an example, compared with the original contract, it is shown that the improved NSGA-III algorithm can shorten the total construction period by 17.52%, reduce the total cost by 15.24%, increase the total quality level by 8.89%, and reduce carbon emissions by 33.64%. The establishment of a multi-objective joint optimization model and its solving algorithm for green construction in prefabricated building projects provides more specific guidance for green construction in uncertain environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Mixing Support Detection-Based Alternating Direction Method of Multipliers for Sparse Hyperspectral Image Unmixing.
- Author
-
Huang, Jie, Liang, Shuang, and Deng, Liang-Jian
- Abstract
Spectral unmixing is important in analyzing and processing hyperspectral images (HSIs). With the availability of large spectral signature libraries, the main task of spectral unmixing is to estimate corresponding proportions called abundances of pure spectral signatures called endmembers in mixed pixels. In this vein, only a few endmembers participate in the formation of mixed pixels in the scene and so we call them active endmembers. A plethora of sparse unmixing algorithms exploit spectral and spatial information in HSIs to enhance abundance estimation results. Many algorithms, however, treat the abundances corresponding to active and nonactive endmembers in the scene equivalently. In this article, we propose a framework named mixing support detection (MSD) for the spectral unmixing problem. The main idea is first to detect the active and nonactive endmembers at each iteration and then to treat the corresponding abundances differently. It follows that we only focus on the estimation of active abundances with the assumption of zero abundances corresponding to nonactive endmembers. It can be expected to reduce the computational cost, avoid the perturbations in nonactive abundances, and enhance the sparsity of the abundances. We embed the MSD framework in classic alternating direction method of multipliers (ADMM) updates and obtain an ADMM-MSD algorithm. In particular, five ADMM-MSD-based unmixing algorithms are provided. The residual and objective convergence results of the proposed algorithm are given under certain assumptions. Both simulated and real-data experiments demonstrate the efficacy and superiority of the proposed algorithm compared with some state-of-the-art algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. High-Dimensional Knockoffs Inference for Time Series Data.
- Author
-
Chi, Chien-Ming, Fan, Yingying, Ing, Ching-Kang, and Lv, Jinchi
- Subjects
- *
TIME series analysis , *FORECASTING , *PRICE inflation - Abstract
We make an initial attempt to establish the theoretical and methodological foundation for model-X knockoffs inference for time series data. We suggest the method of time series knockoffs inference (TSKI) by exploiting the ideas of subsampling and e-values to address the difficulty caused by the serial dependence. We also generalize the robust knockoffs inference in [4] to the time series setting to relax the assumption of known covariate distribution required by model-X knockoffs, since such an assumption is overly stringent for time series data. We establish sufficient conditions under which TSKI achieves the asymptotic false discovery rate (FDR) control. Our technical analysis reveals the effects of serial dependence and unknown covariate distribution on the FDR control. We conduct a power analysis of TSKI using the Lasso coefficient difference knockoff statistic under the generalized linear time series models. The finite-sample performance of TSKI is illustrated with several simulation examples and an economic inflation study. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. DualPFL: A Dual Sparse Pruning Method with Efficient Federated Learning for Edge-Based Object Detection.
- Author
-
Song, Shijin, Du, Sen, Song, Yuefeng, and Zhu, Yongxin
- Subjects
FEDERATED learning ,ARTIFICIAL neural networks ,MACHINE learning ,TRAFFIC signs & signals ,SMART cities - Abstract
With the increasing complexity of neural network models, the huge communication overhead in federated learning (FL) has become a significant issue. To mitigate resource consumption, incorporating pruning algorithms into federated learning has emerged as a promising approach. However, existing pruning algorithms exhibit high sensitivity to network architectures and typically require multiple sessions of retraining to identify optimal structures. The direct application of such strategies to FL would inevitably introduce an additional communication cost. To this end, we propose a novel communication-efficient federated learning framework, DualPFL (Dual Sparse Pruning Federated Learning), designed to address these issues by implementing dynamic sparse pruning and adaptive model aggregation strategies. The experimental results demonstrate that, compared to similar works, our framework can improve convergence speed by more than two times under non-IID data, achieving up to 84% accuracy on the CIFAR-10 dataset, 95% mean average precision (mAP) on the COCO dataset using YOLOv8, and 96% accuracy on the TT100K traffic sign datasets. These findings indicate that DualPFL facilitates secure and efficient collaborative computing in smart city applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. High‐dimensional sparse classification using exponential weighting with empirical hinge loss.
- Author
-
Mai, The Tien
- Subjects
- *
HINGES , *CLASSIFICATION , *EMPLOYMENT , *FORECASTING - Abstract
In this study, we address the problem of high‐dimensional binary classification. Our proposed solution involves employing an aggregation technique founded on exponential weights and empirical hinge loss. Through the employment of a suitable sparsity‐inducing prior distribution, we demonstrate that our method yields favorable theoretical results on prediction error. The efficiency of our procedure is achieved through the utilization of Langevin Monte Carlo, a gradient‐based sampling approach. To illustrate the effectiveness of our approach, we conduct comparisons with the logistic Lasso on simulated data and a real dataset. Our method frequently demonstrates superior performance compared to the logistic Lasso. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Objective quantification of sound sensory attributes in side-by-side vehicles using multiple linear regression models.
- Author
-
Benghanem, Abdelghani, Valentin, Olivier, Gauthier, Philippe-Aubert, and Berry, Alain
- Subjects
AUDIO acoustics ,RECREATIONAL vehicles ,MATHEMATICAL models ,REGRESSION analysis ,ALGORITHMS - Abstract
The evaluation of sound quality is a pivotal area of research within audio and acoustics. Commonly used sound quality evaluation methods include both objective and subjective approaches, the latter being time-consuming and costly as they rely on listening tests. This research work aims to investigate the use of predictive sound quality models as a way to objectively assess the Desire-to-buy of side-by-side vehicles, in a more efficient, faster, and less costly way than conventional methods. Multiple linear regression algorithms were used to validate the objective models derived from objective physical metrics and perceptual psycho-physical metrics. The sensory profile objective models reported in this paper were constructed using parsimonious linear Lasso and Elastic-net algorithms. Our results show that linear objective models effectively account for each of the perceptual attributes of the sensory profiles and the Desire-to-buy, while only requiring a few physical and psychophysical metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Enhanced Collaborative Filtering: Combining Autoencoder and Opposite User Inference to Solve Sparsity and Gray Sheep Issues.
- Author
-
El Youbi El Idrissi, Lamyae, Akharraz, Ismail, El Ouaazizi, Aziza, and Ahaitouf, Abdelaziz
- Subjects
STANDARD deviations ,FEATURE extraction ,DEEP learning ,DATA reduction ,RECOMMENDER systems ,SHEEP - Abstract
In recent years, the study of recommendation systems has become crucial, capturing the interest of scientists and academics worldwide. Music, books, movies, news, conferences, courses, and learning materials are some examples of using the recommender system. Among the various strategies employed, collaborative filtering stands out as one of the most common and effective approaches. This method identifies similar active users to make item recommendations. However, collaborative filtering has two major challenges: sparsity and gray sheep. Inspired by the remarkable success of deep learning across a multitude of application areas, we have integrated deep learning techniques into our proposed method to effectively address the aforementioned challenges. In this paper, we present a new method called Enriched_AE, based on an autoencoder, a well-regarded unsupervised deep learning technique renowned for its superior ability in data dimensionality reduction, feature extraction, and data reconstruction, combined with an augmented rating matrix. This matrix not only includes real users but also incorporates virtual users inferred from opposing ratings given by real users. By doing so, we aim to enhance the accuracy of predictions, thus enabling more effective recommendation generation. Through experimental analysis of the MovieLens 100K dataset, we observe that our method achieves notable reductions in both RMSE (Root Mean Squared Error) and MAE (Mean Absolute Error), underscoring its superiority over state-of-the-art collaborative filtering models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. BAYESIAN HIERARCHICAL MODELING AND ANALYSIS FOR ACTIGRAPH DATA FROM WEARABLE DEVICES.
- Author
-
Di Loro, Pierfrancesco Alaimo, Mingione, Marco, Lipsitt, Jonah, Batteate, Christina M, Jerrett, Michael, and Banerjee, Sudipto
- Subjects
Mathematical Sciences ,Statistics ,Physical Activity ,Prevention ,Bioengineering ,Cardiovascular ,Bayesian hierarchical models ,directed acyclic graph ,Gaussian processes ,physical activity ,sparsity ,spatial-temporal statistics ,Directed acyclic graph ,Physical activity ,Sparsity ,Spatial-temporal statistics ,Econometrics ,Statistics & Probability - Abstract
The majority of Americans fail to achieve recommended levels of physical activity, which leads to numerous preventable health problems such as diabetes, hypertension, and heart diseases. This has generated substantial interest in monitoring human activity to gear interventions toward environmental features that may relate to higher physical activity. Wearable devices, such as wrist-worn sensors that monitor gross motor activity (actigraph units), continuously record the activity levels of a subject, producing massive amounts of high-resolution measurements. Analysis of actigraph data must account for spatial and temporal information on trajectories or paths traversed by subjects wearing such devices. Inferential objectives include estimating a subject's physical activity levels along a given trajectory; identifying trajectories that are more likely to produce higher levels of physical activity for a given subject; and predicting expected levels of physical activity in any proposed new trajectory for a given set of health attributes. Here, we devise a Bayesian hierarchical modeling framework for spatial-temporal actigraphy data to deliver fully model-based inference on trajectories while accounting for subject-level health attributes and spatial-temporal dependencies. We undertake a comprehensive analysis of an original dataset from the Physical Activity through Sustainable Transport Approaches in Los Angeles (PASTA-LA) study to ascertain spatial zones and trajectories exhibiting significantly higher levels of physical activity while accounting for various sources of heterogeneity.
- Published
- 2023
30. Numerical properties of solutions of LASSO regression.
- Author
-
Lakshmi, Mayur V. and Winkler, Joab R.
- Subjects
- *
CONSTRAINT satisfaction , *LEAST squares , *LINEAR systems , *EQUATIONS - Abstract
The determination of a concise model of a linear system when there are fewer samples m than predictors n requires the solution of the equation Ax = b, where A ∈ ℝ^(m×n) and rank(A) = m, such that the selected solution from the infinite number of solutions is sparse, that is, many of its components are zero. This leads to the minimisation with respect to x of f(x, λ) = ‖Ax − b‖₂² + λ‖x‖₁, where λ is the regularisation parameter. This problem, which is called LASSO regression, yields a family of functions x_lasso(λ), and it is necessary to determine the optimal value of λ, that is, the value of λ that balances the fidelity of the model, ‖A x_lasso(λ) − b‖ ≈ 0, and the satisfaction of the constraint that x_lasso(λ) be sparse. The aim of this paper is an investigation of the numerical properties of x_lasso(λ), and the main conclusion of this investigation is the incompatibility of sparsity and stability, that is, a sparse solution x_lasso(λ) that preserves the fidelity of the model exists if the least squares (LS) solution x_ls = A†b is unstable. Two methods, cross validation and the L-curve, for the computation of the optimal value of λ are compared, and it is shown that the L-curve yields significantly better results. This difference between stable and unstable solutions x_ls of the LS problem manifests itself in the very different forms of the L-curve for these two solutions. The paper includes examples of stable and unstable solutions x_ls that demonstrate the theory. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
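The LASSO minimisation described in this abstract can be sketched with iterative soft-thresholding (ISTA), a standard first-order solver; this is a minimal illustration of the objective, not the numerical methods studied in the paper:

```python
import numpy as np

def lasso_ista(A, b, lam, n_iter=2000):
    """Minimise ||Ax - b||_2^2 + lam * ||x||_1 by iterative
    soft-thresholding (ISTA); a minimal sketch, not a production solver."""
    L = 2.0 * np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - 2.0 * A.T @ (A @ x - b) / L    # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))              # m < n: infinitely many exact solutions
x_true = np.zeros(50)
x_true[[3, 17, 40]] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = lasso_ista(A, b, lam=1.0)
# x_hat is sparse, unlike the dense least squares solution np.linalg.pinv(A) @ b
```

The soft-thresholding step is what produces exact zeros, which is the sparsity-versus-fidelity trade-off the abstract discusses.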
31. Optimal sensor placement for the spatial reconstruction of sound fields
- Author
-
Samuel A. Verburg, Filip Elvander, Toon van Waterschoot, and Efren Fernandez-Grande
- Subjects
Optimal sensor selection ,Sound field reconstruction ,Sparsity ,Compressive sensing ,Room impulse response ,Bayesian estimation ,Acoustics. Sound ,QC221-246 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Abstract The estimation of sound fields over space is of interest in sound field control and analysis, spatial audio, room acoustics, and virtual reality. Sound fields can be estimated from a number of measurements distributed over space, yet this remains a challenging problem due to the large experimental effort required. In this work we investigate sensor distributions that are optimal for estimating sound fields. Such optimization is valuable as it can greatly reduce the number of measurements required. The sensor positions are optimized with respect to the parameters describing a sound field, or the pressure reconstructed at the area of interest, by finding the positions that minimize the Bayesian Cramér-Rao bound (BCRB). The optimized distributions are investigated in a numerical study as well as with measured room impulse responses. We observe a reduction of approximately 50% in the number of measurements when the sensor positions are optimized for reconstructing the sound field, compared with random distributions. The results indicate that optimizing the sensor positions is also valuable when the vector of parameters is sparse, especially compared with random sensor distributions, which are often adopted in sparse array processing in acoustics.
- Published
- 2024
- Full Text
- View/download PDF
32. Penalized Mallow's model averaging.
- Author
-
Liu, Yifan
- Subjects
- *
EMPIRICAL research , *ALGORITHMS - Abstract
This article proposes penalized Mallow's model averaging (pMMA) in the linear regression framework with non-nested candidate models. Compared to MMA, additional constraints are imposed on the model weights. We introduce a general framework and allow for non-convex constraints such as SCAD, MCP, and TLP. We establish the asymptotic optimality of the proposed penalized MMA (pMMA) estimator and show that pMMA can achieve a higher sparsity level than the classic MMA. A coordinate-wise descent algorithm is developed to compute the pMMA estimator efficiently. We conduct simulation and empirical studies to show that our pMMA estimator produces a sparser weight vector than MMA, with better out-of-sample performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Jacobian sparsity detection using Bloom filters.
- Author
-
Hovland, Paul D.
- Subjects
- *
AUTOMATIC differentiation , *JACOBIAN matrices , *ALGORITHMS - Abstract
Determining Jacobian sparsity structure is an important step in the efficient computation of sparse Jacobians. We introduce a new method for determining Jacobian sparsity patterns by combining bit vector probing with Bloom filters. We further refine Bloom filter probing by combining it with hierarchical probing to yield a highly effective strategy for Jacobian sparsity pattern determination. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
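As background for the abstract above, a minimal Bloom filter (k hash probes into an m-bit array) looks as follows; this is the generic data structure only, not the paper's Jacobian sparsity-probing protocol:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash probes into an m-bit array.
    False positives are possible; false negatives are not."""
    def __init__(self, m_bits=1024, k=3):
        self.m, self.k, self.bits = m_bits, k, 0
    def _probes(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m
    def add(self, item):
        for p in self._probes(item):
            self.bits |= 1 << p
    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._probes(item))

bf = BloomFilter()
for col in (0, 7, 42):   # e.g. column indices detected as structurally nonzero
    bf.add(col)
# 7 in bf -> True; a column never added is almost always reported absent
```

The one-sided error (no false negatives) is what makes Bloom filters attractive for sparsity detection: a nonzero entry can never be missed, only occasionally over-reported.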
34. Efficient Modeling of Spatial Extremes over Large Geographical Domains.
- Author
-
Hazra, Arnab, Huser, Raphaël, and Bolin, David
- Abstract
Abstract Various natural phenomena exhibit spatial extremal dependence at short spatial distances. However, existing models proposed in the spatial extremes literature often assume that extremal dependence persists across the entire domain. This is a strong limitation when modeling extremes over large geographical domains, and yet it has been mostly overlooked in the literature. We here develop a more realistic Bayesian framework based on a novel Gaussian scale mixture model, with the Gaussian process component defined through a stochastic partial differential equation yielding a sparse precision matrix, and the random scale component modeled as a low-rank Pareto-tailed or Weibull-tailed spatial process determined by compactly-supported basis functions. We show that our proposed model is approximately tail-stationary and that it can capture a wide range of extremal dependence structures. Its inherently sparse probabilistic structure allows fast Bayesian computations in high spatial dimensions based on a customized Markov chain Monte Carlo algorithm prioritizing calibration in the tail. We fit our model to analyze heavy monsoon rainfall data in Bangladesh. Our study shows that our model outperforms natural competitors and that it fits precipitation extremes well. We finally use the fitted model to draw inference on long-term return levels for marginal precipitation and spatial aggregates. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Design of a differentiable L-1 norm for pattern recognition and machine learning.
- Author
-
Zhang, Min, Wang, Yiming, Chen, Hongyu, Li, Taihao, Liu, Shupeng, Gu, Xianfeng, and Xu, Xiaoyin
- Subjects
- *
PATTERN recognition systems , *FEATURE selection , *MACHINE learning , *WORK design , *ALGORITHMS - Abstract
In various applications of pattern recognition, feature selection, and machine learning, the L-1 norm is used as either an objective function or a regularizer. Mathematically, the L-1 norm has unique characteristics that make it attractive in machine learning, feature selection, optimization, and regression. Computationally, however, the L-1 norm presents a hurdle as it is non-differentiable, making the process of finding a solution difficult. Existing approaches therefore rely on numerical approximations. In this work we designed an L-1 norm that is differentiable and, thus, has an analytical solution. The differentiable L-1 norm removes the absolute sign in the conventional definition and is everywhere differentiable. The new L-1 norm is almost everywhere linear, a desirable feature that is also present in the conventional L-1 norm. The only limitation of the new L-1 norm is that near zero its behavior is not linear, hence we consider the new L-1 norm quasi-linear. Being differentiable, the new L-1 norm and its quasi-linear variation are amenable to analytic solutions. Hence, it can facilitate the development and implementation of many algorithms involving the L-1 norm. Our tests validate the capability of the new L-1 norm in various applications. • Designed an L-1 norm that is differentiable and has a high linearity. • The new L-1 norm does not involve the absolute sign required in the conventional definition. • The new definition provides methods involving the L-1 norm with an analytical solution. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
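The paper's exact construction is not reproduced in the abstract, but a standard differentiable surrogate for |x|, sqrt(x² + ε²), illustrates the idea of a smooth, almost-everywhere-linear L-1 norm:

```python
import numpy as np

def smooth_l1(x, eps=1e-3):
    """sqrt(x^2 + eps^2): a standard differentiable surrogate for |x|,
    nearly linear away from zero and smooth at zero. A stand-in for
    illustration, not the paper's own definition."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(np.sqrt(x * x + eps * eps)))

def smooth_l1_grad(x, eps=1e-3):
    """Gradient x / sqrt(x^2 + eps^2) is defined everywhere,
    unlike the subgradient sign(x) of the conventional L-1 norm."""
    x = np.asarray(x, dtype=float)
    return x / np.sqrt(x * x + eps * eps)

v = np.array([2.0, -3.0, 0.5])
# smooth_l1(v) is within ~1e-5 of the exact L1 norm 5.5
```

Away from zero the gradient is essentially ±1, matching the quasi-linear behavior the abstract describes; only within a band of width ~ε around zero does the surrogate deviate from |x|.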
36. Wavelet Feature Screening.
- Author
-
Fonseca, Rodney, Morettin, Pedro, and Pinheiro, Aluísio
- Subjects
- *
FUNCTIONAL analysis , *REGRESSION analysis , *DATA analysis , *OZONE , *EPILEPSY - Abstract
An initial screening of which covariates are relevant is a common practice in high-dimensional regression models. The classic feature screening selects only a subset of covariates correlated with the response variable. However, many important features might have a relevant albeit highly nonlinear relation with the response. One screening approach that handles nonlinearity is to compute the correlation between the response and nonparametric functions of each covariate. Wavelets are powerful tools for nonparametric and functional data analysis but are still seldom used in the feature screening literature. We propose a wavelet feature screening method that can be easily implemented, and we prove that, under suitable conditions, it captures the true covariates with high probability. Simulation results also show that our approach outperforms other screening methods in highly nonlinear models. We apply feature screening to two datasets about ozone concentration and epilepsy. In both applications, the proposed method selects features that match findings in the literature of their respective research fields, illustrating the applicability of feature screening. Supplementary material for this article is available online. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Cost-effective sparsity-aware acoustic feedback canceller.
- Author
-
Eren, Yusuf and Mengüç, Engin Cemal
- Abstract
One of the important problems encountered in acoustic feedback canceller (AFC) systems is that the acoustic feedback (AF) path has a sparse nature. This deteriorates the convergence rate and steady-state error of AFC systems. On the other hand, large-scale source signals such as speech/music and the use of high filter orders increase the computational cost of AFC systems. To this end, in this study, a cost-effective l0-norm online censoring (OC) least mean square (LMS) (l0-OC-LMS) based AFC system (l0-OC-AFC) is proposed, which addresses the sparse nature of the AF path by processing only informative data instead of all the data. Thus, the proposed AFC system significantly reduces the computational complexity without sacrificing performance. This is achieved by combining the OC strategy with the l0 penalty norm promoting sparsity. The proposed l0-OC-AFC system is comprehensively tested in terms of misalignment (MIS) and added stable gain (ASG) on real-world long sparse AF paths measured from a behind-the-ear hearing aid. Simulation results reveal the effectiveness of the proposed l0-OC-AFC system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. A Multiple Targets ISAR Imaging Method with Removal of Micro-Motion Connection Based on Joint Constraints.
- Author
-
Li, Hongxu, Guo, Qinglang, Xu, Zihan, Jin, Xinfei, Su, Fulin, and Li, Xiaodi
- Subjects
- *
INVERSE synthetic aperture radar , *RADAR antennas , *DIRECTIONAL antennas , *RIGID bodies , *PROBLEM solving - Abstract
Combining multiple data sources, Digital Earth is an integrated observation platform based on air–space–ground–sea monitoring systems. Among these data sources, the Inverse Synthetic Aperture Radar (ISAR) is a crucial observation method. ISAR is typically utilized to monitor both military and civilian ships due to its all-day and all-weather superiority. However, in complex scenarios, multiple targets may exist within the same radar antenna beam, resulting in severe defocusing due to different motion conditions. Therefore, this paper proposes a multiple-target ISAR imaging method with the removal of micro-motion connections based on the integration of joint constraints. The fully motion-compensated targets exhibit low rank and local similarity in the high-resolution range profile (HRRP) domain, while the micro-motion components possess sparsity. Additionally, targets display sparsity in the image domain. Inspired by this, we formulate a novel optimization by promoting the low-rank, the Laplacian, and the sparsity constraints of targets and the sparsity constraints of the micro-motion components. This optimization problem is solved by the linearized alternative direction method with adaptive penalty (LADMAP). Furthermore, the different motions of various targets degrade their inherent characteristics. Therefore, we integrate motion compensation transformation into the optimization, accordingly achieving the separation of rigid bodies and the micro-motion components of different targets. Experiments based on simulated data demonstrate the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Level set topology optimization with sparse automatic differentiation.
- Author
-
Neofytou, Andreas, Rios, Thiago, Bujny, Mariusz, Menzel, Stefan, and Kim, H. Alicia
- Abstract
Analytical differentiation for a smooth and accurate sensitivity field is typically used for efficient structural and multidisciplinary optimization. However, it can be challenging for multiphysics and non-linear problems. An alternative approach is automatic differentiation (AD). For large problems with many design variables AD can be computationally expensive and memory demanding and thus its use is still limited. To address some of these challenges, we propose to exploit the sparsity of the level set topology optimization (LSTO) in combination with a hybrid mode when using AD. The modularized LSTO used here enables the use of AD in combination with the classical level set method. In our method, we utilize the operator overloading (OO) approach, and we start our development by comparing different OO libraries. Next, a sparse AD implementation is proposed to take advantage of the sparsity of the level set method, in which sensitivities are only required within a narrow band close to the level set boundary. The obtained results indicate that this sparsity can improve the efficiency of the implementation. However, the reduction in memory requirements is not as significant. To improve the memory consumption as well, we introduce a hybrid mode, where instead of computing the total sensitivity directly with OO, the expression is first derived analytically and then OO is used to obtain the partial derivatives. Our studies show that combining the hybrid mode with the sparsity of the LSTO results in improvements in efficiency, with almost an order of magnitude less computational time for the biggest mesh size studied. Finally, the combination of the forward and reverse modes depending on the partial derivatives at hand is exploited to further improve memory requirements and computational cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Optimality Conditions for Sparse Optimal Control of Viscous Cahn–Hilliard Systems with Logarithmic Potential.
- Author
-
Colli, Pierluigi, Sprekels, Jürgen, and Tröltzsch, Fredi
- Abstract
In this paper we study the optimal control of a parabolic initial-boundary value problem of viscous Cahn–Hilliard type with zero Neumann boundary conditions. Phase field systems of this type govern the evolution of diffusive phase transition processes with conserved order parameter. It is assumed that the nonlinear functions driving the physical processes within the spatial domain are double-well potentials of logarithmic type whose derivatives become singular at the boundary of their respective domains of definition. For such systems, optimal control problems have been studied in the past. We focus here on the situation when the cost functional of the optimal control problem contains a nondifferentiable term like the L 1 -norm, which leads to sparsity of optimal controls. For such cases, we establish first-order necessary and second-order sufficient optimality conditions for locally optimal controls. In the approach to second-order sufficient conditions, the main novelty of this paper, we adapt a technique introduced by Casas et al. in the paper (SIAM J Control Optim 53:2168–2202, 2015). In this paper, we show that this method can also be successfully applied to systems of viscous Cahn–Hilliard type with logarithmic nonlinearity. Since the Cahn–Hilliard system corresponds to a fourth-order partial differential equation in contrast to the second-order systems investigated before, additional technical difficulties have to be overcome. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Application of high performance computing and deep neural network learning in intelligent city mechanical product recommendation.
- Author
-
Zhang, Zijian
- Subjects
ARTIFICIAL neural networks ,INFORMATION technology ,HIGH performance computing ,MATRIX decomposition - Abstract
Intelligent cities are a product of the deep integration of information technology, industrialization, and urbanization, and they contain a large number of intelligent mechanical products. Users widely evaluate the application characteristics of these products, and selecting mechanical products based on user evaluations has become a trend. Nowadays, personalized mechanical product recommendation based on user evaluations is more and more widely used. However, because the evaluation data are sparse, the recommendation accuracy needs to be improved. In this paper, the principle of matrix decomposition is analyzed in depth to provide useful ideas for solving this problem. A bias-weight hybrid recommendation model of user preferences and rating-object characteristics is proposed, and the corresponding hybrid recommendation algorithm is designed. First, estimates obtained using the matrix decomposition principle are used to fill in the sparse data matrix. Second, according to the characteristics of users and ratings, initial positions are set based on the statistical distribution of high-performance computing data, and bias weights are set by incorporating each feature. Finally, the nonlinear learning ability of deep neural networks is used to enhance classification effectiveness. Practice has proved that the constructed model is reasonable, the designed algorithm converges quickly, the recommendation accuracy is improved by about 10%, and the model alleviates the problem of sparse rating data. The practical application is simple and convenient and has good application value. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Underwater Small Target Classification Using Sparse Multi-View Discriminant Analysis and the Invariant Scattering Transform.
- Author
-
Christensen, Andrew, Sen Gupta, Ananya, and Kirsteins, Ivars
- Subjects
AUTOMATIC target recognition ,WAVELETS (Mathematics) ,SOUND wave scattering ,DISCRIMINANT analysis ,SUPPORT vector machines - Abstract
Sonar automatic target recognition (ATR) systems suffer from complex acoustic scattering, background clutter, and waveguide effects that are ever-present in the ocean. Traditional signal processing techniques often struggle to distinguish targets when noise and complicated target geometries are introduced. Recent advancements in machine learning and wavelet theory offer promising directions for extracting informative features from sonar return data. This work introduces a feature extraction and dimensionality reduction technique using the invariant scattering transform and Sparse Multi-view Discriminant Analysis for identifying highly informative features in the PONDEX09/PONDEX10 datasets. The extracted features are used to train a support vector machine classifier that achieves an average classification accuracy of 97.3% using six unique targets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Efficient federated learning for distributed neuroimaging data.
- Author
-
Thapaliya, Bishal, Ohib, Riyasat, Geenjaar, Eloy, Jingyu Liu, Calhoun, Vince, and Plis, Sergey M.
- Subjects
FEDERATED learning ,NEURAL development ,COGNITIVE development ,SCIENTIFIC community ,INFORMATION sharing - Abstract
Recent advancements in neuroimaging have led to greater data sharing among the scientific community. However, institutions frequently maintain control over their data, citing concerns related to research culture, privacy, and accountability. This creates a demand for innovative tools capable of analyzing amalgamated datasets without the need to transfer actual data between entities. To address this challenge, we propose a decentralized sparse federated learning (FL) strategy. This approach emphasizes local training of sparse models to facilitate efficient communication within such frameworks. By capitalizing on model sparsity and selectively sharing parameters between client sites during the training phase, our method significantly lowers communication overheads. This advantage becomes increasingly pronounced when dealing with larger models and accommodating the diverse resource capabilities of various sites. We demonstrate the effectiveness of our approach through the application to the Adolescent Brain Cognitive Development (ABCD) dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Adaptive analysis of the heteroscedastic multivariate regression modeling.
- Author
-
Dong, Xinyu, Lin, Ziyi, Cai, Ziyi, and Yang, Yuehan
- Subjects
- *
REGRESSION analysis , *HETEROSCEDASTICITY , *MULTIVARIATE analysis , *COMPUTER simulation , *NOISE - Abstract
Multivariate regression is a widely used technique for large-scale statistical applications. In this paper, we propose a novel method for the multivariate regression model, called multivariate weighted lasso (MWL). We consider the heteroscedasticity in the noise matrix and introduce an adjustable weight to calibrate the fitting residuals of different responses. The proposed method shows an advantage in complex multivariate regressions with heteroscedasticity. To implement the procedure, an efficient algorithm is provided based on the multivariate coordinate descent. We provide theoretical guarantees for the proposed method. Numerical simulations and financial applications are conducted using the proposed and other existing methods. The results indicate that the proposed method performs well in both variable selection and coefficient estimation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Simultaneous subgroup identification and variable selection for high dimensional data.
- Author
-
Yu, Huicong, Wu, Jiaqi, and Zhang, Weiping
- Subjects
- *
PRIOR learning , *REGRESSION analysis , *SUBGROUP analysis (Experimental design) , *HETEROGENEITY , *SAMPLING methods - Abstract
The high dimensionality of genetic data poses many challenges for subgroup identification, both computationally and theoretically. This paper proposes a double-penalized regression model for subgroup analysis and variable selection for heterogeneous high-dimensional data. The proposed approach can automatically identify the underlying subgroups, recover the sparsity, and simultaneously estimate all regression coefficients without prior knowledge of grouping structure or sparsity construction within variables. We optimize the objective function using the alternating direction method of multipliers with a proximal gradient algorithm and demonstrate the convergence of the proposed procedure. We show that the proposed estimator enjoys the oracle property. Simulation studies demonstrate the effectiveness of the novel method with finite samples, and a real data example is provided for illustration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Robust static hand gesture recognition: harnessing sparsity of deeply learned features.
- Author
-
Mohanty, Aparna, Roy, Kankana, and Sahay, Rajiv Ranjan
- Subjects
- *
DEEP learning , *NONVERBAL cues , *NONVERBAL communication , *GESTURE , *FACIAL expression , *CONVOLUTIONAL neural networks - Abstract
Apart from verbal communication among humans, non-verbal interactions also play a significant role in conveying meaningful information. Non-verbal cues mainly comprise gestures, body postures, and facial expressions. Hand gestures constitute the preferred mechanism for non-verbal communication, and today, they also find utility in human–computer interaction (HCI), gaming, virtual reality, robotics, sign language, etc. While extensive research has been conducted on utilizing deep learning for hand gesture recognition, there has been a notable scarcity of efforts focused on leveraging the sparse characteristics of deeply acquired features to distinguish hand postures, even in the presence of challenges such as varying hand sizes, diverse spatial positions within images, and background clutter. We demonstrate the effect of data augmentation, transfer learning, and sparsity on the performance of the proposed algorithm using publicly available hand gesture datasets. We also provide a quantitative comparative analysis of the proposed approach with state-of-the-art algorithms for static hand gesture recognition. We illustrate a noteworthy finding wherein dictionary learning through LC-KSVD, when applied to fine-tuned features extracted from a deep architecture, outperforms the results achieved by state-of-the-art architectures in the context of hand gesture classification. We have realized substantial enhancements with our proposed methodology when compared to a baseline convolutional model. For instance, in the case of the EgoGesture dataset, we attained an accuracy of 94.9 % , as opposed to the baseline accuracy of 63.3 % , through the utilization of sparsity in deep features. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Efficient Implementation of Multilayer Perceptrons: Reducing Execution Time and Memory Consumption.
- Author
-
Cedron, Francisco, Alvarez-Gonzalez, Sara, Ribas-Rodriguez, Ana, Rodriguez-Yañez, Santiago, and Porto-Pazos, Ana Belen
- Subjects
MULTILAYER perceptrons ,MEMORY ,NEURONS ,DENSITY - Abstract
A technique is presented that reduces the required memory of neural networks through improving weight storage. In contrast to traditional methods, which have an exponential memory overhead with the increase in network size, the proposed method stores only the number of connections between neurons. The proposed method is evaluated on feedforward networks and demonstrates memory saving capabilities of up to almost 80% while also being more efficient, especially with larger architectures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
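One common way to store only the connections that exist, rather than a dense weight matrix, is CSR-style storage; the following is a generic sketch in the spirit of the abstract, not the authors' exact scheme:

```python
import numpy as np

def to_csr(W, tol=0.0):
    """Store only existing connections (nonzero weights) in CSR-like
    arrays (row pointers, column indices, values) instead of a dense
    matrix; a generic sparse-storage sketch."""
    indptr, indices, data = [0], [], []
    for row in W:
        for j, w in enumerate(row):
            if abs(w) > tol:
                indices.append(j)
                data.append(w)
        indptr.append(len(indices))
    return np.array(indptr), np.array(indices), np.array(data, dtype=float)

def csr_matvec(indptr, indices, data, x):
    """Forward pass of one layer: y = W @ x using only stored connections."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y

W = [[0.0, 1.5, 0.0],
     [2.0, 0.0, 0.0]]
ptr, idx, val = to_csr(W)
y = csr_matvec(ptr, idx, val, np.array([1.0, 2.0, 3.0]))
# y == [3.0, 2.0]; only 2 weights stored instead of 6
```

Memory savings grow with the fraction of absent connections, which is consistent with the larger gains the abstract reports for larger architectures.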
48. Compressed sensing: a discrete optimization approach.
- Author
-
Bertsimas, Dimitris and Johnson, Nicholas A. G.
- Subjects
SPARSE approximations ,COMPRESSED sensing ,DATA compression ,IMAGE reconstruction ,CLASSIFICATION algorithms ,SEMIDEFINITE programming - Abstract
We study the Compressed Sensing (CS) problem, which is the problem of finding the most sparse vector that satisfies a set of linear measurements up to some numerical tolerance. CS is a central problem in Statistics, Operations Research and Machine Learning which arises in applications such as signal processing, data compression, image reconstruction, and multi-label learning. We introduce an ℓ 2 regularized formulation of CS which we reformulate as a mixed integer second order cone program. We derive a second order cone relaxation of this problem and show that under mild conditions on the regularization parameter, the resulting relaxation is equivalent to the well studied basis pursuit denoising problem. We present a semidefinite relaxation that strengthens the second order cone relaxation and develop a custom branch-and-bound algorithm that leverages our second order cone relaxation to solve small-scale instances of CS to certifiable optimality. When compared against solutions produced by three state of the art benchmark methods on synthetic data, our numerical results show that our approach produces solutions that are on average 6.22 % more sparse. When compared only against the experiment-wise best performing benchmark method on synthetic data, our approach produces solutions that are on average 3.10 % more sparse. On real world ECG data, for a given ℓ 2 reconstruction error our approach produces solutions that are on average 9.95 % more sparse than benchmark methods ( 3.88 % more sparse if only compared against the best performing benchmark), while for a given sparsity level our approach produces solutions that have on average 10.77 % lower reconstruction error than benchmark methods ( 1.42 % lower error if only compared against the best performing benchmark). When used as a component of a multi-label classification algorithm, our approach achieves greater classification accuracy than benchmark compressed sensing methods. 
This improved accuracy comes at the cost of an increase in computation time by several orders of magnitude. Thus, for applications where runtime is not of critical importance, leveraging integer optimization can yield sparser and lower error solutions to CS than existing benchmarks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
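A classical greedy baseline for the compressed sensing problem described above is Orthogonal Matching Pursuit (OMP); this sketch is for context only and is not the paper's mixed-integer/branch-and-bound method:

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal Matching Pursuit: greedily select k columns of A,
    re-fitting by least squares on the chosen support each step."""
    residual, support = b.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # column most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))   # 40 linear measurements of a 100-dim signal
x_true = np.zeros(100)
x_true[[5, 50, 77]] = [3.0, -2.0, 1.5]
b = A @ x_true
x_hat = omp(A, b, k=3)
```

With well-conditioned Gaussian measurements and a 3-sparse signal, OMP typically recovers the support exactly; the paper's point is that integer optimization can do better still on sparsity and error, at much higher runtime.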
49. Communication-Efficient and Private Federated Learning with Adaptive Sparsity-Based Pruning on Edge Computing.
- Author
-
Song, Shijin, Du, Sen, Song, Yuefeng, and Zhu, Yongxin
- Subjects
FEDERATED learning ,SPARSE matrices ,DEEP learning ,RANDOM noise theory ,EDGE computing - Abstract
As data-driven deep learning (DL) has been applied in various scenarios, the privacy threats have become a widely recognized problem. To boost privacy protection in federated learning (FL), some methods adopt a one-shot differential privacy (DP) approach to obfuscate model updates, yet they do not take into account the dynamic balance between efficiency and privacy protection. To this end, we propose ASPFL—an efficient FL approach with adaptive sparsity-based pruning and differential privacy protection. We further propose the adaptive pruning mechanism by utilizing the Jensen-Shannon divergence as the metric to generate sparse matrices, which are then employed in the model updates. In addition, we introduce adaptive Gaussian noise by assessing the variation of sensitivity through post-pruning uploading. Extensive experiments validate that our proposed ASPFL boosts convergence speed by more than two times under non-IID data. Compared with existing DP-FL methods, ASPFL can maximally achieve over 82% accuracy on CIFAR-10, while the communication cost is greatly reduced by 40% under the same level of privacy protection. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
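The ASPFL abstract combines three ingredients: a Jensen-Shannon divergence signal, magnitude pruning of updates, and Gaussian-mechanism noise. The sketch below is one hypothetical reading of that pipeline for a 1-D update vector; the adaptation rule (`base_ratio + jsd`), histogram binning, and all parameter names are illustrative assumptions, since the abstract does not specify them:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def sparsify_update(update, prev_update, base_ratio=0.5, bins=32):
    """Adaptive magnitude pruning: larger divergence between the current and
    previous update distributions -> keep a larger fraction of entries."""
    hist = lambda u: np.histogram(
        np.abs(u), bins=bins, range=(0.0, max(np.abs(u).max(), 1e-8))
    )[0].astype(float) + 1.0                      # +1 smoothing avoids empty bins
    jsd = js_divergence(hist(update), hist(prev_update))
    keep_ratio = min(1.0, base_ratio + jsd)       # hypothetical adaptation rule
    k = max(1, int(keep_ratio * update.size))
    mask = np.zeros(update.shape, dtype=bool)
    mask[np.argsort(-np.abs(update))[:k]] = True  # keep the k largest magnitudes
    return update * mask, mask

def add_gaussian_noise(update, sensitivity, epsilon=1.0, delta=1e-5, rng=None):
    """Gaussian mechanism calibrated to the post-pruning sensitivity."""
    rng = rng or np.random.default_rng()
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return update + rng.normal(0.0, sigma, size=update.shape)

# Demo on synthetic updates (not real FL traffic).
u = np.random.default_rng(1).standard_normal(1000)
prev = np.random.default_rng(2).standard_normal(1000)
sparse_u, keep = sparsify_update(u, prev)
private_u = add_gaussian_noise(sparse_u, sensitivity=np.abs(sparse_u).max(),
                               rng=np.random.default_rng(3))
```

Only the pruned, noised vector would leave the client, which is what couples the communication savings to the privacy budget.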
50. Enhancing quality and speed in database‐free neural network reconstructions of undersampled MRI with SCAMPI.
- Author
-
Siedler, Thomas M., Jakob, Peter M., and Herold, Volker
- Subjects
ARTIFICIAL neural networks ,MAGNETIC resonance imaging ,IMAGE reconstruction ,MAGNETICS ,SPEED - Abstract
Purpose: We present SCAMPI (Sparsity Constrained Application of deep Magnetic resonance Priors for Image reconstruction), an untrained deep neural network for MRI reconstruction without previous training on datasets. It expands the Deep Image Prior approach with a multidomain, sparsity‐enforcing loss function to achieve higher image quality at a faster convergence speed than previously reported methods. Methods: Two‐dimensional MRI data from the FastMRI dataset with Cartesian undersampling in the phase‐encoding direction were reconstructed at different acceleration rates for single-coil and multicoil data. Results: The performance of our architecture was compared to state‐of‐the‐art Compressed Sensing methods and ConvDecoder, another untrained neural network for two‐dimensional MRI reconstruction. SCAMPI outperforms these by better reducing undersampling artifacts and yielding lower error metrics in multicoil imaging. In comparison to ConvDecoder, the U‐Net architecture combined with an elaborated loss function allows much faster convergence at higher image quality. SCAMPI can reconstruct multicoil data without explicit knowledge of coil sensitivity profiles. Moreover, it is a novel tool for reconstructing undersampled single-coil k‐space data. Conclusion: Our approach avoids the overfitting to dataset features that can occur in neural networks trained on databases, because the network parameters are tuned only on the reconstruction data. It yields better results and faster reconstruction than the baseline untrained neural network approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
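SCAMPI's core idea is to combine k-space data consistency with a sparsity-enforcing penalty. The toy NumPy sketch below shows only that loss structure (subgradient descent on a data-consistency term plus an image-domain ℓ1 penalty) on a synthetic sparse image; it omits the U-Net prior, the multidomain loss, and multicoil handling that SCAMPI actually uses, and all sizes and parameters are illustrative:

```python
import numpy as np

def reconstruct(kspace, mask, lam=0.01, step=0.5, iters=300):
    """Toy reconstruction of undersampled Cartesian k-space:
    subgradient descent on 0.5*||M*F(x) - y||^2 + lam*||x||_1."""
    x = np.real(np.fft.ifft2(kspace))              # zero-filled starting image
    for _ in range(iters):
        resid = (np.fft.fft2(x) - kspace) * mask   # error at sampled locations only
        x = x - step * (np.real(np.fft.ifft2(resid)) + lam * np.sign(x))
    return x

# Synthetic 10-spike "image" and a ~60% random sampling mask (hypothetical data).
rng = np.random.default_rng(0)
x_true = np.zeros((32, 32))
rows, cols = rng.integers(0, 32, 10), rng.integers(0, 32, 10)
x_true[rows, cols] = rng.uniform(1.0, 2.0, 10) * rng.choice([-1.0, 1.0], 10)
mask = rng.random((32, 32)) < 0.6
kspace = np.fft.fft2(x_true) * mask                # measured data, zero elsewhere
x_hat = reconstruct(kspace, mask)
```

The ℓ1 term suppresses the aliasing that zero-filling leaves in the unsampled frequencies; in SCAMPI the same role is played by the sparsity-enforcing part of the loss while the network parametrization supplies the image prior.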