Search Results
17 results for "Ambikapathi, ArulMurugan"
2. A deep learning framework for heart rate estimation from facial videos
- Author
- Hsu, Gee-Sern Jison, Xie, Rui-Cang, Ambikapathi, ArulMurugan, and Chou, Kae-Jy
- Published
- 2020
3. Online RBM: Growing Restricted Boltzmann Machine on the fly for unsupervised representation
- Author
- Savitha, Ramasamy, Ambikapathi, ArulMurugan, and Rajaraman, Kanagasabai
- Published
- 2020
4. Robust cross-pose face recognition using landmark oriented depth warping
- Author
- Hsu, Gee-Sern (Jison), Ambikapathi, ArulMurugan, Chung, Sheng-Luen, and Shie, Hung-Cheng
- Published
- 2018
5. Convex-Optimization-Based Compartmental Pharmacokinetic Analysis for Prostate Tumor Characterization Using DCE-MRI.
- Author
- Ambikapathi, ArulMurugan, Chan, Tsung-Han, Lin, Chia-Hsiang, Yang, Fei-Shih, Chi, Chong-Yung, and Wang, Yue
- Subjects
- PHARMACOKINETICS, MAGNETIC resonance imaging, PROSTATE tumors, EXPONENTIAL decay law, PROSTATE cancer
- Abstract
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a powerful imaging modality for studying the pharmacokinetics of a suspected cancer/tumor tissue. The pharmacokinetic (PK) analysis of prostate cancer includes the estimation of time activity curves (TACs) and, thereby, the corresponding kinetic parameters (KPs), and plays a pivotal role in the diagnosis and prognosis of prostate cancer. In this paper, we develop a blind source separation algorithm, namely the convex-optimization-based KP estimation (COKE) algorithm, for PK analysis based on compartmental modeling of DCE-MRI data, for effective prostate tumor detection and quantification. The COKE algorithm first identifies the three best representative pixels in the DCE-MRI data, corresponding to the plasma, fast-flow, and slow-flow TACs, respectively. The estimation accuracy of the flux rate constants (FRCs) of the fast-flow and slow-flow TACs directly affects the estimation accuracy of the KPs, which provide the cancer and normal tissue distribution maps in the prostate region. The COKE algorithm exploits the matrix structure (Toeplitz, lower triangular, and exponential decay) of the original nonconvex FRC estimation problem and reformulates it into two convex optimization problems that can reliably estimate the FRCs. Once the FRCs are estimated, the KPs can be effectively estimated by solving a pixel-wise constrained (convex) curve-fitting problem. Simulation results demonstrate the efficacy of the proposed COKE algorithm. The COKE algorithm is also evaluated on DCE-MRI data from four patients with prostate cancer, and the results obtained are consistent with clinical observations.
- Published
- 2016
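The final COKE step described in the abstract above, a pixel-wise constrained (convex) curve fit once the FRCs are fixed, can be sketched with nonnegative least squares. The plasma TAC, the exponential-convolution tissue model, and all parameter values below are illustrative assumptions rather than the paper's actual model or data:

```python
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0.0, 5.0, 60)        # acquisition times (assumed, in minutes)
cp = t * np.exp(-1.5 * t)            # assumed plasma TAC

def tissue_tac(k, t, cp):
    """Tissue TAC model: discrete convolution of the plasma TAC
    with an exponential kernel exp(-k*t) (illustrative model)."""
    dt = t[1] - t[0]
    return np.convolve(cp, np.exp(-k * t))[: len(t)] * dt

k_fast, k_slow = 2.0, 0.3            # assumed fast/slow FRC estimates
A = np.column_stack([tissue_tac(k_fast, t, cp), tissue_tac(k_slow, t, cp)])

# synthesize one pixel's noisy TAC with known kinetic parameters
true_kp = np.array([0.8, 0.2])
y = A @ true_kp + 0.001 * np.random.default_rng(0).standard_normal(len(t))

# pixel-wise convex fit: nonnegative least squares recovers the KPs
kp_hat, _ = nnls(A, y)
print(kp_hat)                        # close to [0.8, 0.2]
```

With the FRCs fixed, the fit is linear in the KPs, which is what makes this last stage a convex problem.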
6. Identifiability of the Simplex Volume Minimization Criterion for Blind Hyperspectral Unmixing: The No-Pure-Pixel Case.
- Author
- Lin, Chia-Hsiang, Ma, Wing-Kin, Li, Wei-Chiang, Chi, Chong-Yung, and Ambikapathi, ArulMurugan
- Subjects
- HYPERSPECTRAL imaging systems, PIXELS, ALGORITHMS, COMPUTER simulation, IMAGING systems
- Abstract
In blind hyperspectral unmixing (HU), the pure-pixel assumption is well known to enable simple and effective blind HU solutions. However, the pure-pixel assumption is not always satisfied exactly, especially in scenarios where pixels are heavily mixed. In the no-pure-pixel case, a good blind HU approach to consider is the minimum volume enclosing simplex (MVES). Empirical experience suggests that MVES algorithms can perform well without pure pixels, although it was not entirely clear why this is true from a theoretical viewpoint. This paper addresses that issue. We develop an analysis framework in which the perfect endmember identifiability of MVES is studied in the noiseless case. We prove that MVES is indeed robust against the lack of pure pixels, as long as the pixels are not too heavily mixed and not too asymmetrically spread. The theoretical results are supported by numerical simulation results.
- Published
- 2015
7. On the endmember identifiability of Craig's criterion for hyperspectral unmixing: A statistical analysis for three-source case.
- Author
- Lin, Chia-Hsiang, Ambikapathi, ArulMurugan, Li, Wei-Chiang, and Chi, Chong-Yung
- Published
- 2013
8. Outlier-robust dimension reduction and its impact on hyperspectral endmember extraction.
- Author
- Huang, Hao-En, Chan, Tsung-Han, Ambikapathi, ArulMurugan, Ma, Wing-Kin, and Chi, Chong-Yung
- Published
- 2012
9. Convex geometry based estimation of number of endmembers in hyperspectral images.
- Author
- Ambikapathi, ArulMurugan, Chan, Tsung-Han, and Chi, Chong-Yung
- Abstract
Hyperspectral unmixing decomposes the hyperspectral data cube into endmember signatures and their corresponding abundance maps. For the unmixing results to be completely interpretable, the number of materials (or endmembers) present in the scene should be known a priori, which, however, is unknown in practice. In this work, we use hyperspectral data geometry and the successive endmember estimation strategy of an endmember extraction algorithm (EEA) to develop two novel algorithms for estimating the number of endmembers, namely the geometry-based estimation of number of endmembers convex hull (GENE-CH) and affine hull (GENE-AH) algorithms. The proposed GENE algorithms estimate the number of endmembers by applying Neyman-Pearson hypothesis testing to the endmembers sequentially estimated by an EEA until the estimate of the number of endmembers is obtained. Monte Carlo simulations demonstrate the efficacy of the proposed GENE algorithms compared to some existing benchmark methods for estimating the number of endmembers.
- Published
- 2012
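The geometric fact underlying GENE-AH, that noiseless mixed pixels lie in the (N-1)-dimensional affine hull of the N endmember signatures, can be illustrated with a toy singular-value check. This is only a sketch of the geometry; the actual GENE algorithms apply Neyman-Pearson tests to successively estimated endmembers rather than this residual sweep:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, L = 30, 4, 500                       # bands, endmembers, pixels (toy sizes)
E = rng.uniform(0.0, 1.0, (M, N))          # endmember signatures
S = rng.dirichlet(np.ones(N), size=L).T    # abundances: nonnegative, sum to 1
Y = E @ S                                  # noiseless linear mixing model

# mean-removed data has rank N-1, since all pixels lie in an
# (N-1)-dimensional affine hull; count the nonzero singular values
d = Y.mean(axis=1, keepdims=True)
sv = np.linalg.svd(Y - d, compute_uv=False)
n_hat = int((sv > 1e-8 * sv[0]).sum()) + 1
print(n_hat)                               # 4
```

In the noisy case the trailing singular values no longer vanish, which is why the actual algorithms resort to hypothesis testing instead of a hard threshold.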
10. Fast algorithms for robust hyperspectral endmember extraction based on worst-case simplex volume maximization.
- Author
- Chan, Tsung-Han, Liou, Ji-Yuan, Ambikapathi, ArulMurugan, Ma, Wing-Kin, and Chi, Chong-Yung
- Abstract
Hyperspectral endmember extraction (EE) estimates endmember signatures (or material spectra) from the hyperspectral data of an unexplored area in order to analyze the materials and their composition therein. However, the presence of noise in the data poses a serious problem for EE. Recently, robustness against noise has been taken into account in the design of EE algorithms. The robust maximum-volume simplex criterion [1] has been shown to yield performance improvements in noisy scenarios, but its practical applicability is limited by its high implementation complexity. In this paper, we propose two fast algorithms that approximate this robust criterion [1] by handling a set of partial max-min optimization problems in an alternating manner and a successive manner, respectively. Monte Carlo simulations demonstrate the superior computational efficiency and efficacy of the proposed robust algorithms in noisy scenarios over the robust algorithm in [1] and some benchmark EE algorithms.
- Published
- 2012
11. An nBSS algorithm for pharmacokinetic analysis of prostate cancer using DCE-MR images.
- Author
- Ambikapathi, ArulMurugan, Chan, Tsung-Han, Keizer, Kannan, Yang, Fei-Shih, and Chi, Chong-Yung
- Abstract
Dynamic contrast-enhanced magnetic resonance (DCE-MR) imaging is an exciting tool for studying the pharmacokinetics of a suspected tumor tissue. Nonetheless, the inevitable partial volume effect in DCE-MR images may seriously hinder the quantitative analysis of the kinetic parameters. In this work, based on the conventional three-tissue compartment model, we propose an unsupervised nonnegative blind source separation (nBSS) algorithm, called time activity curve (TAC) estimation by projection (TACE-Pro), to dissect and characterize the composite signatures in DCE-MR images of patients with prostate cancer. The TACE-Pro algorithm first identifies the TACs (up to a scaling ambiguity) with theoretical support. The scaling ambiguity and the estimation of the kinetic parameters are then handled by pharmacokinetic model fitting. Monte Carlo simulations and real DCE-MR image experiments on a patient with prostate cancer demonstrate the superior efficacy of the proposed TACE-Pro algorithm. Furthermore, the real data experiments revealed that the extracted information is consistent with the biopsy results.
- Published
- 2012
12. A Signal Processing Perspective on Hyperspectral Unmixing: Insights from Remote Sensing.
- Author
- Ma, Wing-Kin, Bioucas-Dias, Jose M., Chan, Tsung-Han, Gillis, Nicolas, Gader, Paul, Plaza, Antonio J., Ambikapathi, ArulMurugan, and Chi, Chong-Yung
- Abstract
Blind hyperspectral unmixing (HU), also known as unsupervised HU, is one of the most prominent research topics in signal processing (SP) for hyperspectral remote sensing [1], [2]. Blind HU aims at identifying the materials present in a captured scene, as well as their compositions, by using the high spectral resolution of hyperspectral images. It is a blind source separation (BSS) problem from an SP viewpoint. Research on this topic started in the 1990s in geoscience and remote sensing [3]–[7], enabled by technological advances in hyperspectral sensing at the time. In recent years, blind HU has attracted much interest from other fields such as SP, machine learning, and optimization, and the subsequent cross-disciplinary research activities have made blind HU a vibrant topic. The resulting impact is not just on remote sensing: blind HU has provided a unique problem scenario that inspired researchers from different fields to devise novel blind SP methods. In fact, one may say that blind HU has established a new branch of BSS approaches not seen in classical BSS studies. In particular, the convex geometry concepts, discovered by early remote sensing researchers through empirical observations [3]–[7] and refined by later research, are elegant and very different from the statistical-independence-based BSS approaches established in the SP field. Moreover, the latest research on blind HU is rapidly adopting advanced techniques, such as those in sparse SP and optimization. The present development of blind HU seems to be converging to a point where the lines between remote-sensing-originated ideas and advanced SP and optimization concepts are no longer clear, and insights from both sides can be used to establish better methods.
- Published
- 2014
13. Robust Affine Set Fitting and Fast Simplex Volume Max-Min for Hyperspectral Endmember Extraction.
- Author
- Chan, Tsung-Han, Ambikapathi, ArulMurugan, Ma, Wing-Kin, and Chi, Chong-Yung
- Subjects
- EXTRACTION (Chemistry), DATA corruption, LEAST squares, ALGORITHMS, COMPUTER simulation
- Abstract
Hyperspectral endmember extraction estimates endmember signatures (or material spectra) from the hyperspectral data of an area in order to analyze the materials and their composition therein. The presence of noise and outliers in the data poses a serious problem in endmember extraction. In this paper, we handle noise- and outlier-contaminated data with a two-step approach. We first propose a robust affine-set-fitting algorithm for joint dimension reduction and outlier removal. The idea is to find a contamination-free, data-representative affine set from the corrupted data while keeping the effects of outliers minimal, in the least-squares-error sense. Then, we devise two computationally efficient algorithms for extracting endmembers from the outlier-removed data. The two algorithms are established from a simplex volume max-min formulation recently proposed to cope with noisy scenarios. A robust algorithm, called worst-case alternating volume maximization (WAVMAX), was previously developed for the simplex volume max-min formulation but is computationally expensive. The two new algorithms employ a different kind of decoupled max-min partial optimization, with the design emphasis on low-complexity implementation. Computer simulations and real-data experiments demonstrate the efficacy, computational efficiency, and applicability of the proposed algorithms, in comparison with the WAVMAX algorithm and some benchmark endmember extraction algorithms.
- Published
- 2013
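The first step above, affine set fitting for dimension reduction, has a simple least-squares form: the best-fitting (N-1)-dimensional affine set is the data mean plus the span of the top N-1 principal components. The sketch below shows this non-robust core on clean synthetic data; the paper's robust variant additionally removes outliers, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)
M, L, N = 50, 500, 3                     # bands, pixels, endmembers (toy sizes)
S = rng.dirichlet(np.ones(N), size=L).T  # abundances (columns sum to 1)
E = rng.uniform(0.0, 1.0, (M, N))        # endmember signatures
Y = E @ S                                # noiseless linear mixing

d = Y.mean(axis=1, keepdims=True)        # affine set offset = data mean
U = np.linalg.svd(Y - d, full_matrices=False)[0]
C = U[:, : N - 1]                        # affine set basis = top N-1 PCs

# every noiseless pixel lies (numerically) on the fitted affine set
residual = (Y - d) - C @ (C.T @ (Y - d))
print(np.abs(residual).max())            # ~ machine precision
```

The fitted pair (C, d) then gives the reduced-dimension representation C.T @ (Y - d) on which the endmember extraction step operates.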
14. Hyperspectral Data Geometry-Based Estimation of Number of Endmembers Using p-Norm-Based Pure Pixel Identification Algorithm.
- Author
- Ambikapathi, ArulMurugan, Chan, Tsung-Han, Chi, Chong-Yung, and Keizer, Kannan
- Subjects
- HYPERSPECTRAL imaging systems, ALGORITHMS, LANDSCAPES, MONTE Carlo method, PIXELS
- Abstract
Hyperspectral endmember extraction is the process of estimating endmember signatures from hyperspectral observations, in an attempt to study the underlying mineral composition of a landscape. However, estimating the number of endmembers, which is usually assumed to be known a priori in most endmember estimation algorithms (EEAs), remains a challenging task. In this paper, assuming a hyperspectral linear mixing model, we propose a hyperspectral data geometry-based approach for estimating the number of endmembers by utilizing the successive endmember estimation strategy of an EEA. The approach is fulfilled by two novel algorithms, namely the geometry-based estimation of number of endmembers convex hull (GENE-CH) and affine hull (GENE-AH) algorithms. The GENE-CH and GENE-AH algorithms are based on the fact that all the observed pixel vectors lie in the convex hull and the affine hull of the endmember signatures, respectively. The proposed GENE algorithms estimate the number of endmembers by applying Neyman-Pearson hypothesis testing to the endmember estimates provided by a successive EEA until the estimate of the number of endmembers is obtained. Since the estimation accuracies of the proposed GENE algorithms depend on the performance of the EEA used, a reliable, reproducible, and successive EEA, called the p-norm-based pure pixel identification (TRI-P) algorithm, is then proposed. The performance of the proposed TRI-P algorithm and the estimation accuracies of the GENE algorithms are demonstrated through Monte Carlo simulations. Finally, the proposed GENE and TRI-P algorithms are applied to real AVIRIS hyperspectral data acquired over the Cuprite mining site, Nevada, and some conclusions and future directions are provided.
- Published
- 2013
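A successive pure-pixel search in the spirit of TRI-P can be sketched as follows (here with p = 2, i.e., repeatedly picking the maximum-norm pixel and projecting all pixels onto the orthogonal complement of the endmembers found so far; the paper's exact algorithm and general-p details may differ):

```python
import numpy as np

def successive_pure_pixels(Y, N):
    """Successively pick the max 2-norm pixel, then deflate by
    orthogonal projection (an SPA-style sketch of the TRI-P idea)."""
    Yp = Y.astype(float).copy()
    idx = []
    for _ in range(N):
        k = int(np.argmax(np.linalg.norm(Yp, axis=0)))   # max-norm pixel
        idx.append(k)
        u = Yp[:, k] / np.linalg.norm(Yp[:, k])
        Yp = Yp - np.outer(u, u @ Yp)                     # project out u
    return idx

rng = np.random.default_rng(0)
M, N, L = 20, 3, 300
E = rng.uniform(0.5, 1.5, (M, N))          # endmember signatures
S = rng.dirichlet(np.ones(N), size=L).T    # abundances (sum to 1)
S[:, :N] = np.eye(N)                       # plant one pure pixel per endmember
Y = E @ S                                  # noiseless mixing

found = successive_pure_pixels(Y, N)
print(sorted(found))                       # the planted pure pixels [0, 1, 2]
```

Because the norm is convex, its maximum over the data cloud is attained at a vertex of the convex hull, which is a pure pixel whenever one exists; deflation then repeats the argument for the remaining endmembers.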
15. Chance-Constrained Robust Minimum-Volume Enclosing Simplex Algorithm for Hyperspectral Unmixing.
- Author
- Ambikapathi, ArulMurugan, Chan, Tsung-Han, Ma, Wing-Kin, and Chi, Chong-Yung
- Subjects
- ROBUST optimization, SIMPLEXES (Mathematics), ALGORITHMS, SPECTRUM analysis, IMAGE processing, NOISE measurement
- Abstract
Effective unmixing of a hyperspectral data cube in a noisy scenario has been a challenging research problem in remote sensing. A branch of existing hyperspectral unmixing algorithms is based on Craig's criterion, which states that the vertices of the minimum-volume simplex enclosing the hyperspectral data should yield high-fidelity estimates of the endmember signatures associated with the data cloud. Recently, we developed a minimum-volume enclosing simplex (MVES) algorithm based on Craig's criterion and validated that it is very useful for unmixing highly mixed hyperspectral data. However, the presence of noise in the observations expands the actual data cloud, and as a consequence, the endmember estimates obtained by applying Craig-criterion-based algorithms to noisy data may no longer be in close proximity to the true endmember signatures. In this paper, we propose a robust MVES (RMVES) algorithm that accounts for the noise effects in the observations by employing chance constraints, which in turn control the volume of the resulting simplex. Under the Gaussian noise assumption, the chance-constrained MVES problem can be formulated as a deterministic nonlinear program, which can then be conveniently handled by alternating optimization, in which each subproblem involved is solved using sequential quadratic programming solvers. The proposed RMVES is compared with several existing benchmark algorithms, including its predecessor, the MVES algorithm. Monte Carlo simulations and real hyperspectral data experiments demonstrate the efficacy of the proposed RMVES algorithm.
- Published
- 2011
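The reformulation that makes Gaussian chance constraints deterministic is the standard quantile identity: for scalar Gaussian w ~ N(mu, sigma^2), Pr(w <= b) >= 1 - eta holds exactly when mu + sigma * Phi^{-1}(1 - eta) <= b, where Phi^{-1} is the inverse normal CDF. A minimal numerical illustration of this identity (not the RMVES algorithm itself):

```python
import numpy as np
from scipy.stats import norm

mu, sigma, eta = 1.0, 0.5, 0.1
# tightest deterministic bound b for which Pr(w <= b) >= 1 - eta
b = mu + sigma * norm.ppf(1 - eta)

# Monte Carlo check: at the boundary, Pr(w <= b) should be ~ 1 - eta
w = np.random.default_rng(0).normal(mu, sigma, 200_000)
p = (w <= b).mean()
print(p)                                  # close to 0.9
```

Varying eta tightens or relaxes the deterministic constraint, which is how the chance constraints end up controlling the simplex volume in the robust formulation.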
16. A Simplex Volume Maximization Framework for Hyperspectral Endmember Extraction.
- Author
- Chan, Tsung-Han, Ma, Wing-Kin, Ambikapathi, ArulMurugan, and Chi, Chong-Yung
- Subjects
- COMBINATORIAL optimization, SPECTRUM analysis, IMAGE processing, PIXELS, ROBUST optimization, SIMPLEXES (Mathematics), ALGORITHMS
- Abstract
In the late 1990s, Winter proposed an endmember extraction belief that has had much impact on endmember extraction techniques in hyperspectral remote sensing. The idea is to find a maximum-volume simplex whose vertices are drawn from the pixel vectors. Winter's belief has stimulated much interest, resulting in many different variations of pixel search algorithms, widely known as N-FINDR, being proposed. In this paper, we take a continuous optimization perspective to revisit Winter's belief, with the aim of providing an alternative framework for formulating and understanding it in a systematic manner. We first prove that, fundamentally, the existence of pure pixels is not only sufficient for the Winter problem to perfectly identify the ground-truth endmembers but also necessary. Then, under the umbrella of the Winter problem, we derive two methods using two different optimization strategies. One uses alternating optimization; the resulting algorithm turns out to be an N-FINDR variant, but, with the proposed formulation, we can pin down some of its convergence characteristics. The other uses successive optimization; interestingly, the resulting algorithm is found to exhibit some similarity to vertex component analysis. The framework thus provides linkage and alternative interpretations for these existing algorithms. Furthermore, we propose a robust, worst-case generalization of the Winter problem that accounts for perturbed pixel effects in noisy scenarios. An algorithm combining alternating optimization and projected subgradients is devised to deal with this problem. Both simulations and real-data experiments demonstrate the viability and merits of the proposed algorithms.
- Published
- 2011
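The alternating-optimization strategy discussed above can be sketched on toy data: after reducing the data to N-1 dimensions, cyclically replace each simplex vertex with the pixel that maximizes a determinant-based simplex volume. This is an illustrative N-FINDR-style sketch under a planted-pure-pixel setup, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, L = 20, 3, 400
E = rng.uniform(0.5, 1.5, (M, N))          # endmember signatures
S = rng.dirichlet(np.ones(N), size=L).T    # abundances (sum to 1)
S[:, :N] = np.eye(N)                       # plant pure pixels at indices 0..N-1
Y = E @ S

# dimension-reduce to N-1 so the simplex volume is a determinant
d = Y.mean(axis=1, keepdims=True)
U = np.linalg.svd(Y - d, full_matrices=False)[0][:, : N - 1]
X = U.T @ (Y - d)                          # (N-1) x L reduced data

def vol(idx):
    """Simplex volume (up to a constant): |det of edge vectors|."""
    V = X[:, idx]
    return abs(np.linalg.det(V[:, :-1] - V[:, -1:]))

idx = list(rng.choice(L, N, replace=False))  # random initial vertices
for _ in range(3):                           # a few alternating sweeps
    for j in range(N):
        idx[j] = max(range(L), key=lambda k: vol(idx[:j] + [k] + idx[j + 1:]))

print(sorted(idx))                           # the planted pure pixels [0, 1, 2]
```

Each inner update is a one-vertex partial maximization; with pure pixels present, the sweeps settle on the hull vertices, which matches the paper's point that pure pixels are sufficient for the Winter problem to identify the ground truth.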
17. Unsupervised Domain Adaptation via Domain-Adaptive Diffusion.
- Author
- Peng, D., Ke, Q., Ambikapathi, A., Yazici, Y., Lei, Y., and Liu, J.
- Abstract
Unsupervised Domain Adaptation (UDA) is challenging due to the large distribution discrepancy between the source domain and the target domain. Inspired by diffusion models, which have a strong capability to gradually convert data distributions across a large gap, we explore the diffusion technique to handle the challenging UDA task. However, using diffusion models to convert data distributions across different domains is a non-trivial problem, as standard diffusion models generally perform conversion from the Gaussian distribution rather than from a specific domain distribution. Besides, during the conversion, the semantics of the source-domain data must be preserved so that samples can be classified correctly in the target domain. To tackle these problems, we propose a novel Domain-Adaptive Diffusion (DAD) module accompanied by a Mutual Learning Strategy (MLS), which can gradually convert the data distribution from the source domain to the target domain while enabling the classification model to learn along the domain transition process. Consequently, our method eases the challenge of UDA by decomposing the large domain gap into small ones and gradually enhancing the capacity of the classification model so that it finally adapts to the target domain. Our method outperforms the current state of the art by a large margin on three widely used UDA datasets.
- Published
- 2024
Discovery Service for Jio Institute Digital Library