132 results for "graph-cuts"
Search Results
2. Pavement Crack Detection Using Progressive Curvilinear Structure Anisotropy Filtering and Adaptive Graph-Cuts
- Author
-
Zhenhua Li, Guili Xu, Yuehua Cheng, Zhengsheng Wang, and Quan Wu
- Subjects
Crack detection ,curvilinear structure filtering ,phase congruency ,optimized segmentation ,graph-cuts ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Delineation of pavement cracks is essential for the damage assessment and maintenance of pavements. Existing methods are not sufficiently robust to interferences including varied illumination, non-uniform intensity, and complex texture noise. An integrated system for the automatic extraction of pavement cracks based on progressive curvilinear structure filtering and optimized segmentation techniques is proposed in this paper. Considering phase congruency and path morphological transformation, a phase congruency guided multi-scale path anisotropy filtering (PCmPA) method is first developed to generate a crack saliency map, significantly enhancing crack structures and eliminating isotropic texture noise. Phase congruency guided multi-scale free-form anisotropic filter (PCmFFA) is then presented as an extended curvilinear structure filter considering context information to enhance PCmPA. Finally, to accurately identify crack pixels and background, the two independent global filtering responses are incorporated with the phase congruency map and integrated into the graph-cuts based global optimization model with an adaptive regularization parameter. Experiments are conducted on two public pavement datasets and a self-captured laser-scanned pavement dataset, with results demonstrating that the proposed method can achieve superior performance compared to six existing algorithms.
- Published
- 2020
- Full Text
- View/download PDF
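Note: the abstract above does not spell out the energy being minimized; as a sketch, binary graph-cuts segmentation models of this kind typically minimize a sum of per-pixel data costs and pairwise discontinuity penalties, with the regularization weight lambda being the quantity this paper makes adaptive:

    E(L) = \sum_{p \in \mathcal{P}} D_p(l_p) \;+\; \lambda \sum_{(p,q) \in \mathcal{N}} V_{pq}(l_p, l_q), \qquad l_p \in \{\text{crack}, \text{background}\}

Here D_p would be derived from the two filtering responses and the phase congruency map, V_{pq} penalizes differing labels at neighbouring pixels, and a single min-cut gives the global optimum when V_{pq} is submodular; the paper's exact terms are not given in the abstract.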
3. Adaptive Edge Preserving Maps in Markov Random Fields for Hyperspectral Image Classification.
- Author
-
Pan, Chao, Jia, Xiuping, Li, Jie, and Gao, Xinbo
- Subjects
- *
ENERGY function , *EDGES (Geometry) - Abstract
This article presents a novel adaptive edge preserving (aEP) scheme in Markov random fields (MRFs) for hyperspectral image (HSI) classification. MRF regularization usually suffers from over-smoothing at boundaries and insufficient refinement within class objects. This work divides and conquers the problem class-by-class, integrating K(K−1)/2 (K is the class number) aEP maps (aEPMs) into the MRF model. A spatial label dependence measure (SLDM) is designed to estimate the interpixel label dependence for a given spectral similarity measure. For each class pair, the aEPM is optimized by maximizing the difference between intraclass and interclass SLDM. The aEPMs are then integrated with the multilevel logistic (MLL) model to regularize the raw pixelwise labeling obtained by spectral and spectral–spatial methods, respectively. The graph-cuts-based αβ-swap algorithm is modified to optimize the designed energy function. Moreover, to evaluate the final refined results at edges and small details thoroughly, segmentation evaluation metrics are introduced. Experiments conducted on real HSI data demonstrate the superiority of aEPMs in evaluation metrics and region consistency, especially in detail preservation. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
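Note: one illustrative reading of the class-pair construction above (not necessarily the authors' exact energy) is a multilevel logistic model whose pairwise term is modulated, per class pair, by the corresponding aEPM:

    E(\mathbf{x}) = \sum_{p} -\log P(y_p \mid x_p) \;+\; \beta \sum_{(p,q) \in \mathcal{N}} m^{(x_p, x_q)}_{pq} \, [\,x_p \neq x_q\,]

with one edge-preserving map m^{(i,j)} per unordered class pair, which is where the K(K−1)/2 count in the abstract comes from; β is the usual smoothing weight, and the labeling is optimized with the modified αβ-swap moves.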
4. Semi-automatic Segmentation of Scattered and Distributed Objects
- Author
-
Farid, Muhammad Shahid, Lucenteforte, Maurizio, Khan, Muhammad Hassan, Grangetto, Marco, Kacprzyk, Janusz, Series editor, Pal, Nikhil R., Advisory editor, Bello Perez, Rafael, Advisory editor, Corchado, Emilio S., Advisory editor, Hagras, Hani, Advisory editor, Kóczy, László T., Advisory editor, Kreinovich, Vladik, Advisory editor, Lin, Chin-Teng, Advisory editor, Lu, Jie, Advisory editor, Melin, Patricia, Advisory editor, Nedjah, Nadia, Advisory editor, Nguyen, Ngoc Thanh, Advisory editor, Wang, Jun, Advisory editor, Kurzynski, Marek, editor, Wozniak, Michal, editor, and Burduk, Robert, editor
- Published
- 2018
- Full Text
- View/download PDF
5. Improved body quantitative susceptibility mapping by using a variable‐layer single‐min‐cut graph‐cut for field‐mapping.
- Author
-
Boehm, Christof, Diefenbach, Maximilian N., Makowski, Marcus R., and Karampinos, Dimitrios C.
- Subjects
GRAPH algorithms ,LUMBAR vertebrae ,FAT ,COST functions ,BODIES of water - Abstract
Purpose: To develop a robust algorithm for field‐mapping in the presence of water–fat components, large B0 field inhomogeneities and MR signal voids and to apply the developed method in body applications of quantitative susceptibility mapping (QSM). Methods: A framework solving the cost‐function of the water–fat separation problem in a single‐min‐cut graph‐cut based on the variable‐layer graph construction concept was developed. The developed framework was applied to a numerical phantom enclosing an MR signal void, an air bubble experimental phantom, 14 large field of view (FOV) head/neck region in vivo scans and to 6 lumbar spine in vivo scans. Field‐mapping and subsequent QSM results using the proposed algorithm were compared to results using an iterative graph‐cut algorithm and a formerly proposed single‐min‐cut graph‐cut. Results: The proposed method was shown to yield accurate field‐map and susceptibility values in all simulation and in vivo datasets when compared to reference values (simulation) or literature values (in vivo). The proposed method showed improved field‐map and susceptibility results compared to iterative graph‐cut field‐mapping especially in regions with low SNR, strong field‐map variations and high R2∗ values. Conclusions: A single‐min‐cut graph‐cut field‐mapping method with a variable‐layer construction was developed for field‐mapping in body water–fat regions, improving quantitative susceptibility mapping particularly in areas close to MR signal voids. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
6. Rapid Reconstruction of a Three-Dimensional Mesh Model Based on Oblique Images in the Internet of Things
- Author
-
Dongling Ma, Guangyun Li, and Li Wang
- Subjects
Internet of Things ,smart city ,oblique images ,mesh model ,3D reconstruction ,graph-cuts ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
One of the main targets of the Internet of Things (IoT) is the construction of smart cities, and many industries based on the IoT serve popular applications in a smart city. However, 3-D reconstruction constitutes a major difficulty in the construction of a smart city. In recent years, oblique photography technology has been widely applied to rapid 3-D modeling and other aspects of smart cities. However, in the automatic construction of a 3-D mesh model for oblique photogrammetry, complex building geometries make it very difficult to construct a triangular mesh model. Therefore, a network construction method is needed that can not only effectively construct a 3-D mesh model but also handle the results of auto-modeling for an oblique image. The representative network construction method produces a huge triangulation network in which the constructed surface of the object does not satisfy the manifold property, making the model inconvenient to optimize and edit and yielding a low network construction efficiency. To solve these problems, a new method for constructing a high-quality manifold mesh model is proposed in this paper. First, an adaptive octree division algorithm is used to divide the point cloud data into sub-domains that cover each other. Then, a mesh reconstruction is performed in each sub-domain, and an efficient mesh construction algorithm based on relabeling the vertices of the directed graph is proposed to construct the manifold mesh. Finally, a triangular facet orientation method is used to homogenize the normal vectors of the mesh. The experimental results prove that the proposed method greatly improves the mesh reconstruction, effectively reflects the model details, and possesses a strong anti-noise ability. It is also robust and particularly suitable for the 3-D reconstruction of large scenes and complex surfaces.
- Published
- 2018
- Full Text
- View/download PDF
7. Automatic Graph-Based Local Edge Detection
- Author
-
Lazarek, Jagoda, Szczepaniak, Piotr S., Kacprzyk, Janusz, Series editor, and Kowalczuk, Zdzisław, editor
- Published
- 2016
- Full Text
- View/download PDF
8. Graph Cut Based Segmentation of Predefined Shapes: Applications to Biological Imaging
- Author
-
Soubies, Emmanuel, Weiss, Pierre, Descombes, Xavier, Kacprzyk, Janusz, Series editor, Fred, Ana, editor, and De Marsico, Maria, editor
- Published
- 2015
- Full Text
- View/download PDF
9. A graph-cut approach for pulmonary artery-vein segmentation in noncontrast CT images.
- Author
-
Jimenez-Carretero, Daniel, Bermejo-Peláez, David, Nardelli, Pietro, Fraga, Patricia, Fraile, Eduardo, San José Estépar, Raúl, and Ledesma-Carbayo, Maria J
- Subjects
- *
DIAGNOSTIC imaging , *PULMONARY artery , *COMPUTED tomography , *RANDOM forest algorithms , *VASCULAR remodeling - Abstract
Highlights: • An automatic method for pulmonary artery-vein (AV) segmentation in CT is proposed. • Vessel extraction is performed using scale-space particles. • Pre-classification with random forests (RF) defines AV similarity scores. • AV classification combines prior knowledge and connectivity using Graph-cuts (GC). • High accuracy is achieved on a set of clinical and synthetically generated CT cases. Abstract: Lung vessel segmentation has been widely explored by the biomedical image processing community; however, the differentiation of arterial from venous irrigation is still a challenge. Pulmonary artery–vein (AV) segmentation using computed tomography (CT) is growing in importance owing to its undeniable utility in multiple cardiopulmonary pathological states, especially those implying vascular remodelling, allowing the study of both flow systems separately. We present a new framework to approach the separation of tree-like structures using local information and a specifically designed graph-cut methodology that ensures connectivity as well as the spatial and directional consistency of the derived subtrees. This framework has been applied to the pulmonary AV classification using a random forest (RF) pre-classifier to exploit the local anatomical differences of arteries and veins. The evaluation of the system was performed using 192 bronchopulmonary segment phantoms, 48 anthropomorphic pulmonary CT phantoms, and 26 lungs from noncontrast CT images with precise voxel-based reference standards obtained by manually labelling the vessel trees. The experiments reveal a relevant improvement in the accuracy (∼20%) of the vessel particle classification with the proposed framework with respect to using only the pre-classification based on local information applied to the whole area of the lung under study. The results demonstrated the accurate differentiation between arteries and veins in both clinical and synthetic cases, specifically when the image quality can guarantee a good airway segmentation, which opens a huge range of possibilities in the clinical study of cardiopulmonary diseases. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
10. ALS Point Cloud Classification by Integrating an Improved Fully Convolutional Network into Transfer Learning with Multi-Scale and Multi-View Deep Features
- Author
-
Xiangda Lei, Hongtao Wang, Cheng Wang, Zongze Zhao, Jianqi Miao, and Puguang Tian
- Subjects
ALS point cloud ,classification ,transfer learning ,fully convolutional neural network ,graph-cuts ,small training samples ,Chemical technology ,TP1-1185 - Abstract
Airborne laser scanning (ALS) point clouds have been widely used in various fields, as they provide three-dimensional data with high accuracy on a large scale. However, because ALS data are discrete, irregularly distributed, and noisy, it is still a challenge to accurately identify various typical surface objects from 3D point clouds. In recent years, many researchers have achieved better results in classifying 3D point clouds by using different deep learning methods. However, most of these methods require a large number of training samples and cannot be widely used in complex scenarios. In this paper, we propose an ALS point cloud classification method that integrates an improved fully convolutional network into transfer learning with multi-scale and multi-view deep features. First, shallow features of the airborne laser scanning point cloud such as height, intensity and change of curvature are extracted to generate feature maps by multi-scale voxel and multi-view projection. Second, these feature maps are fed into the pre-trained DenseNet201 model to derive deep features, which are used as input for a fully convolutional neural network with convolutional and pooling layers. By using this network, the local and global features are integrated to classify the ALS point cloud. Finally, a graph-cuts algorithm considering context information is used to refine the classification results. We tested our method on the semantic 3D labeling dataset of the International Society for Photogrammetry and Remote Sensing (ISPRS). Experimental results show that the overall accuracy and the average F1 score obtained by the proposed method are 89.84% and 83.62%, respectively, when only 16,000 points of the original data are used for training.
- Published
- 2020
- Full Text
- View/download PDF
11. People Tracking Based on Predictions and Graph-Cuts Segmentation
- Author
-
Soudani, Amira, Zagrouba, Ezzeddine, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Bebis, George, editor, Boyle, Richard, editor, Parvin, Bahram, editor, Koracin, Darko, editor, Li, Baoxin, editor, Porikli, Fatih, editor, Zordan, Victor, editor, Klosowski, James, editor, Coquillart, Sabine, editor, Luo, Xun, editor, Chen, Min, editor, and Gotz, David, editor
- Published
- 2013
- Full Text
- View/download PDF
12. Infarct Segmentation of the Left Ventricle Using Graph-Cuts
- Author
-
Karim, Rashed, Chen, Zhong, Obom, Samantha, Ma, Ying-Liang, Acheampong, Prince, Gill, Harminder, Gill, Jaspal, Rinaldi, C. Aldo, O’Neill, Mark, Razavi, Reza, Schaeffter, Tobias, Rhode, Kawal S., Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Camara, Oscar, editor, Mansi, Tommaso, editor, Pop, Mihaela, editor, Rhode, Kawal, editor, Sermesant, Maxime, editor, and Young, Alistair, editor
- Published
- 2013
- Full Text
- View/download PDF
13. Background Inpainting for Videos with Dynamic Objects and a Free-Moving Camera
- Author
-
Granados, Miguel, Kim, Kwang In, Tompkin, James, Kautz, Jan, Theobalt, Christian, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Fitzgibbon, Andrew, editor, Lazebnik, Svetlana, editor, Perona, Pietro, editor, Sato, Yoichi, editor, and Schmid, Cordelia, editor
- Published
- 2012
- Full Text
- View/download PDF
14. Validation of a Novel Method for the Automatic Segmentation of Left Atrial Scar from Delayed-Enhancement Magnetic Resonance
- Author
-
Karim, Rashed, Arujuna, Aruna, Brazier, Alex, Gill, Jaswinder, Rinaldi, C. Aldo, Cooklin, Michael, O’Neill, Mark, Razavi, Reza, Schaeffter, Tobias, Rueckert, Daniel, Rhode, Kawal S., Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Camara, Oscar, editor, Konukoglu, Ender, editor, Pop, Mihaela, editor, Rhode, Kawal, editor, Sermesant, Maxime, editor, and Young, Alistair, editor
- Published
- 2012
- Full Text
- View/download PDF
15. Automatic Segmentation of Left Atrial Scar from Delayed-Enhancement Magnetic Resonance Imaging
- Author
-
Karim, Rashed, Arujuna, Aruna, Brazier, Alex, Gill, Jaswinder, Rinaldi, C. Aldo, O’Neill, Mark, Razavi, Reza, Schaeffter, Tobias, Rueckert, Daniel, Rhode, Kawal S., Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Metaxas, Dimitris N., editor, and Axel, Leon, editor
- Published
- 2011
- Full Text
- View/download PDF
16. Segmentation and tracking of lung nodules via graph‐cuts incorporating shape prior and motion from 4D CT.
- Author
-
Cha, Jungwon, Farhangi, Mohammad Mehdi, Dunlap, Neal, and Amini, Amir A.
- Subjects
- *
PULMONARY nodules , *IMAGE segmentation , *LUNG cancer , *INTENSITY modulated radiotherapy , *CONE beam computed tomography - Abstract
Purpose: We have developed a robust tool for performing volumetric and temporal analysis of nodules from respiratory-gated four-dimensional (4D) CT. The method could prove useful in IMRT of lung cancer. Methods: We modified the conventional graph-cuts method by adding an adaptive shape prior as well as motion information within a signed distance function representation to permit more accurate and automated segmentation and tracking of lung nodules in 4D CT data. Active shape models (ASM) with a signed distance function were used to capture the shape prior information, preventing unwanted surrounding tissues from becoming part of the segmented object. The optical flow method was used to estimate the local motion and to extend three-dimensional (3D) segmentation to 4D by warping a prior shape model through time. The algorithm has been applied to segmentation of well-circumscribed, vascularized, and juxtapleural lung nodules from respiratory-gated CT data. Results: In all cases, 4D segmentation and tracking for five phases of high-resolution CT data took approximately 10 min on a PC workstation with an AMD Phenom II and 32 GB of memory. The method was trained on 500 breath-held 3D CT datasets from the LIDC database and was tested on 17 4D lung nodule CT datasets consisting of 85 volumetric frames. The validation tests resulted in an average Dice Similarity Coefficient (DSC) of 0.68 for all test data. An important by-product of the method is quantitative volume measurement from 4D CT from end-inspiration to end-expiration, which will also have important diagnostic value. Conclusion: The algorithm performs robust segmentation of lung nodules from 4D CT data. The signed distance ASM provides the shape prior information which, within the iterative graph-cuts framework, is adaptively refined to best fit the input data, preventing unwanted surrounding tissue from merging with the segmented object. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
17. A new primal-dual algorithm for multilabel graph-cuts problems with approximate moves.
- Author
-
Cheng, Ziang, Liu, Yang, and Liu, Guojun
- Subjects
GRAPH theory ,ENERGY conversion ,STOCHASTIC convergence ,GRAPHICS processing units ,PARAMETERIZATION - Abstract
Graph-cuts based move-making algorithms have been intensively studied. Previous methods uniformly rely on max-flow/min-cut solutions for move-making, and have achieved generally good performance on a variety of applications. Early research suggests that path-augmenting algorithms such as BK tend to perform well on grid-structured graphs. Unlike conventional graph-cuts methods, our algorithm does not require the exact max-flow/min-cut solution for an update. Instead, any cut/flow of a subproblem can be used for the primal/dual update, which allows the max-flow solver to stop at any time during execution. Thanks to the dynamic nature of our approach, the energy convergence rate can be improved severalfold in our experiments on GPU. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
18. ShapeCut: Bayesian surface estimation using shape-driven graph.
- Author
-
Veni, Gopalkrishna, Elhabian, Shireen Y., and Whitaker, Ross T.
- Subjects
- *
IMAGE segmentation , *MAGNETIC resonance imaging , *HEART fibrosis , *GADOLINIUM , *ATRIAL fibrillation , *MATHEMATICAL optimization , *BAYESIAN analysis - Abstract
A variety of medical image segmentation problems present significant technical challenges, including heterogeneous pixel intensities, noisy/ill-defined boundaries and irregular shapes with high variability. The strategy of estimating optimal segmentations within a statistical framework that combines image data with priors on anatomical structures promises to address some of these technical challenges. However, methods that rely on local optimization techniques and/or local shape penalties (e.g., smoothness) have been proven to be inadequate for many difficult segmentation problems. These challenging segmentation problems can benefit from the inclusion of global shape priors within a maximum-a-posteriori estimation framework, which biases solutions toward an object class of interest. In this paper, we propose a maximum-a-posteriori formulation that relies on a generative image model by incorporating both local and global shape priors. The proposed method relies on graph cuts as well as a new shape parameters estimation that provides a global updates-based optimization strategy. We demonstrate our approach on synthetic datasets as well as on the left atrial wall segmentation from late-gadolinium enhancement MRI, which has been shown to be effective for identifying myocardial fibrosis in the diagnosis of atrial fibrillation. Experimental results prove the effectiveness of the proposed approach in terms of the average surface distance between extracted surfaces and the corresponding ground-truth, as well as the clinical efficacy of the method in the identification of fibrosis and scars in the atrial wall. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
19. Multiscale Graph Cuts Based Method for Coronary Artery Segmentation in Angiograms.
- Author
-
Mabrouk, S., Oueslati, C., and Ghorbel, F.
- Subjects
CARDIOVASCULAR disease diagnosis ,CORONARY arteries ,ANGIOGRAPHY ,IMAGE segmentation ,ALGORITHMS ,ANATOMY - Abstract
Context: X-ray angiography is the tool most used by clinicians to diagnose the majority of cardiovascular diseases and deformations in coronary arteries such as stenosis. In most applications involving angiogram interpretation, accurate segmentation is essential to extract the coronary artery tree and thus speed up the medical intervention. Materials and Methods: In this paper, we propose a multiscale algorithm based on graph cuts for vessel extraction. The proposed method introduces direction information into an adapted energy functional combining the vesselness measure, the geodesic path and the edgeness measure. The direction information guides the segmentation along artery structures and promotes the extraction of relevant vessels. In the multiscale analysis, we study two scale adaptations (local and global). In the local approach, the image is divided into regions and scales are selected within a range including the smallest and largest vessel diameters in each region, while the global approach computes these diameters considering the whole image. Experiments are conducted on three datasets DS1, DS2 and DS3, having different characteristics, and the proposed method is compared with four other methods, namely fuzzy c-means clustering (FC), hysteresis thresholding (HT), region growing (RG) and accurate quantitative coronary artery segmentation (AQCA). Results: Comparing the two proposed scale adaptations, results show that they give similar precision values on DS1 and DS2, and the local adaptation improves the precision on DS3. Standard quantitative measures were used for algorithm evaluation, including the Dice Similarity Measure (DSM), sensitivity and precision. The proposed method outperforms the four considered methods in terms of DSM and sensitivity. The precision values of the proposed method are slightly lower than those of AQCA but remain higher than those of the three other methods. Conclusion: The method proposed in this paper automatically segments coronary arteries in angiography images. A multiscale approach is adopted to introduce direction information into a graph-cuts based method in order to guide it to better detect curvilinear structures. Quantitative evaluation of the method shows promising segmentation results compared to some segmentation methods from the state of the art. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
20. Rapid Texture Optimization of Three-Dimensional Urban Model Based on Oblique Images.
- Author
-
Weilong Zhang, Ming Li, Bingxuan Guo, Deren Li, and Ge Guo
- Subjects
- *
THREE-dimensional display systems , *ALGORITHMS , *MARKOV random fields , *GRAPHIC methods , *ENERGY function - Abstract
Seamless texture mapping is one of the key technologies for photorealistic 3D texture reconstruction. In this paper, a method for rapid texture optimization of 3D urban reconstruction based on oblique images is proposed, addressing the texture fragments, seams, and color inconsistency that arise in urban 3D texture mapping based on low-altitude oblique images. First, we apply radiation correction to the experimental images with a radiation processing algorithm. Then, an efficient occlusion detection algorithm based on OpenGL is proposed according to the mapping relation between the terrain triangular mesh surface and the images, to perform occlusion detection of the visible texture on the triangular facets and create a list of visible images. Finally, a texture clustering algorithm based on a Markov Random Field is put forward, utilizing the inherent attributes of the images, and the energy function is minimized by Graph-Cuts. The experimental results show that the method reduces texture fragments, seams, and color inconsistency in the reconstructed 3D texture model. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
22. REGION MERGING VIA GRAPH-CUTS
- Author
-
Jean Stawiaski and Etienne Decenciére
- Subjects
graph-cuts ,region merging ,watershed transform ,Medicine (General) ,R5-920 ,Mathematics ,QA1-939 - Abstract
In this paper, we discuss the use of graph-cuts to optimally merge the regions of the watershed transform. Watershed is a simple, intuitive and efficient way of segmenting an image. Unfortunately it presents a few limitations such as over-segmentation and poor detection of low boundaries. Our segmentation process merges regions of the watershed over-segmentation by minimizing a specific criterion using graph-cuts optimization. Two methods will be introduced in this paper. The first is based on region histograms and dissimilarity measures between adjacent regions. The second method deals with efficient approximation of minimal surfaces and geodesics. Experimental results show that these techniques can be used efficiently for large image segmentation when a pre-computed low-level segmentation is available. We present these methods in the context of interactive medical image segmentation.
- Published
- 2011
- Full Text
- View/download PDF
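Note: as a minimal sketch of the construction described above, the region adjacency graph of a watershed over-segmentation can be labeled with a single min-cut. The example below assumes the PyMaxflow package (`pip install PyMaxflow`); the histogram-dissimilarity weight, seed handling and parameter names are illustrative placeholders rather than the paper's actual criteria.

    import numpy as np
    import maxflow  # PyMaxflow

    def label_regions(n_regions, adjacency, dissimilarity, fg_seeds, bg_seeds, lam=1.0):
        """Binary labeling of watershed regions over a region adjacency graph.

        adjacency: iterable of (i, j) index pairs of adjacent regions.
        dissimilarity: dict mapping (i, j) to a histogram dissimilarity d_ij >= 0.
        fg_seeds / bg_seeds: sets of region indices marked interactively by the user.
        """
        g = maxflow.Graph[float]()
        nodes = g.add_nodes(n_regions)
        # n-links: adjacent regions with similar histograms are expensive to separate.
        for (i, j) in adjacency:
            w = lam * np.exp(-dissimilarity[(i, j)])
            g.add_edge(nodes[i], nodes[j], w, w)
        # t-links: user seeds pin regions to one side of the cut.
        for i in range(n_regions):
            if i in fg_seeds:
                g.add_tedge(nodes[i], 1e9, 0.0)   # huge source capacity keeps the region on the source (object) side
            elif i in bg_seeds:
                g.add_tedge(nodes[i], 0.0, 1e9)   # huge sink capacity keeps the region on the sink (background) side
        g.maxflow()
        return [g.get_segment(nodes[i]) for i in range(n_regions)]  # 0 = object (source) side, 1 = background (sink) side

The paper's first criterion (histogram dissimilarity between adjacent regions) would enter through the n-link weights; its minimal-surface/geodesic variant would change how those weights are derived, not the cut machinery.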
23. Field Map Reconstruction in Magnetic Resonance Imaging Using Bayesian Estimation
- Author
-
Fabio Baselice, Giampaolo Ferraioli, and Aymen Shabou
- Subjects
Magnetic Resonance Imaging ,field map estimation ,phase unwrapping ,bayesian estimation ,graph-cuts ,Markov Random Field ,Chemical technology ,TP1-1185 - Abstract
Field inhomogeneities in Magnetic Resonance Imaging (MRI) can cause blur or image distortion as they produce off-resonance frequency at each voxel. These effects can be corrected if an accurate field map is available. Field maps can be estimated starting from the phase of multiple complex MRI data sets. In this paper we present a technique based on statistical estimation in order to reconstruct a field map exploiting two or more scans. The proposed approach implements a Bayesian estimator in conjunction with the Graph Cuts optimization method. The effectiveness of the method has been proven on simulated and real data.
- Published
- 2009
- Full Text
- View/download PDF
24. Lung diaphragm tracking in CBCT images using spatio-temporal MRF.
- Author
-
Sundarapandian, Manivannan, Kalpathi, Ramakrishnan, Siochi, R. Alfredo C., and Kadam, Amrut S.
- Subjects
- *
DIAPHRAGM radiography , *LUNG radiography , *CONE beam computed tomography , *MARKOV random fields , *HOUGH functions , *BIOMARKERS - Abstract
In EBRT, in order to monitor the intra-fraction motion of thoracic and abdominal tumors, one of the standard approaches is to use the lung diaphragm apex as an internal marker. However, tracking the position of the apex from image-based observations is a challenging problem, as it undergoes both position and shape variation. The purpose of this paper is to propose an alternative method for tracking the ipsi-lateral hemidiaphragm apex (IHDA) position on Cone Beam Computed Tomography (CBCT) projection images. A hierarchical method is proposed to track the IHDA position across the frames. The diaphragm state is modeled as a spatio-temporal Markov Random Field (MRF). The likelihood function is derived from votes based on a 4D Hough space. The optimal state of the diaphragm is obtained by solving the associated energy minimization problem using graph-cuts. A heterogeneous GPU implementation of the method is provided using the CUDA framework and its performance is compared with that of a CPU implementation. The method was tested using 15 clinical CBCT images. The results demonstrate that the MRF formulation outperforms the full search method in terms of accuracy. The GPU-based heterogeneous implementation of the proposed algorithm takes about 25 s, a 16% improvement over the existing benchmark. The proposed MRF formulation considers all possible combinations from the 4D Hough space and therefore results in better tracking accuracy. The GPU-based implementation exploits the inherent parallelism in our algorithm to accelerate performance, thereby increasing the viability of the approach for clinical use. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
25. A MIN-CUT BASED FILTER FOR AIRBORNE LIDAR DATA.
- Author
-
Ural, Serkan and Shan, Jie
- Subjects
LIDAR ,DATA extraction ,ACQUISITION of data - Abstract
LiDAR (Light Detection and Ranging) is a routinely employed technology as a 3-D data collection technique for topographic mapping. Conventional workflows for analyzing LiDAR data require the ground to be determined prior to extracting other features of interest. Filtering the terrain points is one of the fundamental processes for acquiring higher-level information from unstructured LiDAR point data. There are many ground-filtering algorithms in the literature, spanning several broad categories regarding their strategies. Most of the earlier algorithms examine only the local characteristics of the points or grids, such as the slope and elevation discontinuities. Since considering only the local properties restricts the filtering performance due to the complexity of the terrain and the features, some recent methods utilize global properties of the terrain as well. This paper presents a new ground filtering method, Min-cut Based Filtering (MBF), which takes both local and global properties of the points into account. MBF considers ground filtering as a labeling task. First, an energy function is designed on a graph, where LiDAR points are considered as the nodes of the graph, connected to each other as well as to two auxiliary nodes representing the ground and off-ground labels. The graph is constructed such that the data costs are assigned to the edges connecting the points to the auxiliary nodes, and the smoothness costs to the edges between points. The data and smoothness terms of the energy function are formulated using point elevations and approximate ground information. The data term encodes the likelihood of the points being ground or off-ground, while the smoothness term enforces spatial coherence between neighboring points. The energy function is optimized by finding the minimum cut on the graph via the alpha-expansion algorithm. The resulting graph-cut provides the labeling of the point cloud as ground and off-ground points. Evaluation of the proposed method on the ISPRS test dataset for ground filtering demonstrates that the results are comparable with those of most existing methods. The overall average filtering accuracy for the 15 ISPRS test areas is 91.3%. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
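Note: the MBF construction above (two auxiliary label nodes, elevation-based data costs, smoothness between neighbouring points) maps directly onto a standard s-t min-cut. Below is a minimal sketch assuming the PyMaxflow and scikit-learn packages; the placeholder costs are illustrative, not the paper's energy terms, and since the labeling is binary a single cut is used instead of alpha-expansion.

    import numpy as np
    import maxflow  # PyMaxflow
    from sklearn.neighbors import NearestNeighbors

    def filter_ground(points, approx_ground_z, k=5, lam=0.5, sigma=1.0):
        """points: (N, 3) LiDAR coordinates; approx_ground_z: (N,) approximate ground elevation."""
        n = len(points)
        dz = points[:, 2] - approx_ground_z             # height above the approximate ground
        g = maxflow.Graph[float]()
        nodes = g.add_nodes(n)
        # Data term (t-links): a point high above the ground is costly to label "ground",
        # a point at ground level is costly to label "off-ground" (illustrative costs).
        cost_ground = np.clip(dz, 0.0, None)
        cost_offground = np.clip(1.0 - dz, 0.0, None)
        # With PyMaxflow, a node that ends on the source side pays the sink capacity and
        # vice versa, so with these t-links source = ground and sink = off-ground.
        for i in range(n):
            g.add_tedge(nodes[i], cost_offground[i], cost_ground[i])
        # Smoothness term (n-links): neighbouring points with similar heights prefer the same label.
        nbrs = NearestNeighbors(n_neighbors=k + 1).fit(points)
        _, idx = nbrs.kneighbors(points)
        for i in range(n):
            for j in idx[i, 1:]:                        # skip the point itself
                w = lam * np.exp(-abs(dz[i] - dz[int(j)]) / sigma)
                g.add_edge(nodes[i], nodes[int(j)], w, w)
        g.maxflow()
        return np.array([g.get_segment(nodes[i]) for i in range(n)])  # 0 = ground, 1 = off-ground

The approximate ground surface itself (used in the paper to build the data term) would come from a coarse pre-filtering step, which is not reproduced here.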
26. White Matter MS-Lesion Segmentation Using a Geometric Brain Model.
- Author
-
Strumia, Maddalena, Schmidt, Frank R., Anastasopoulos, Constantinos, Granziera, Cristina, Krueger, Gunnar, and Brox, Thomas
- Subjects
- *
MAGNETIC resonance imaging of the brain , *MULTIPLE sclerosis diagnosis , *IMAGE segmentation , *BRAIN abnormalities , *WHITE matter (Nerve tissue) , *IMAGE quality analysis - Abstract
Brain magnetic resonance imaging (MRI) in patients with Multiple Sclerosis (MS) shows regions of signal abnormalities, named plaques or lesions. The spatial lesion distribution plays a major role in MS diagnosis. In this paper we present a 3D MS-lesion segmentation method based on an adaptive geometric brain model. We model the topological properties of the lesions and brain tissues in order to constrain the lesion segmentation to the white matter. As a result, the method is independent of an MRI atlas. We tested our method on the MICCAI MS grand challenge proposed in 2008 and achieved competitive results. In addition, we used an in-house dataset of 15 MS patients, for which we achieved the best results for most distances in comparison to atlas-based methods. Besides classical segmentation distances, we motivate and formulate a new distance to evaluate the quality of the lesion segmentation, while being robust with respect to minor inconsistencies at the boundary level of the ground truth annotation. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
27. Automated Analysis of Hip Joint Cartilage Combining MR T2 and Three-Dimensional Fast-Spin-Echo Images.
- Author
-
Chandra, Shekhar S., Surowiec, Rachel, Ho, Charles, Xia, Ying, Engstrom, Craig, Crozier, Stuart, and Fripp, Jurgen
- Abstract
Purpose: To validate a fully automated scheme to extract biochemical information from the hip joint cartilages using MR T2 mapping images incorporating segmentation of co-registered three-dimensional Fast-Spin-Echo (3D-SPACE) images. Methods: Manual analyses of unilateral hip (3 Tesla) MR images of 24 asymptomatic volunteers were used to validate a 3D deformable model method for automated cartilage segmentation of SPACE scans, partitioning of the individual femoral and acetabular cartilage plates into clinically defined subregions and propagating these results to T2 maps to calculate region-wise T2 value statistics. Analyses were completed on a desktop computer (~10 min per case). Results: The mean voxel overlap between automated A and manual M segmentations of the cartilage volumes in the (clinically based) SPACE images was 73% (100 × 2|A ∩ M| / [|A| + |M|]). The automated and manual analyses demonstrated a relative difference error <10% in the median "T2 average signal" for each cartilage plate. The automated and manual analyses showed consistent patterns between significant differences in T2 data across the hip cartilage sub-regions. Conclusion: The good agreement between the manual and automatic analyses of T2 values indicates the use of structural 3D-SPACE MR images with the proposed method provides a promising approach for automated quantitative T2 assessment of hip joint cartilages. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
28. EXTRACTING MOBILE OBJECTS IN IMAGES USING A VELODYNE LIDAR POINT CLOUD.
- Author
-
Brédif, Mathieu, Vallet, Bruno, and Wen Xiao
- Subjects
OPTICAL radar ,DEMPSTER-Shafer theory ,IMAGE segmentation - Abstract
This paper presents a full pipeline to extract mobile objects in images based on a simultaneous laser acquisition with a Velodyne scanner. The point cloud is first analysed to extract mobile objects in 3D. This is done using Dempster-Shafer theory and it results in weights indicating, for each point, whether it corresponds to a mobile object, a fixed object, or whether no decision can be made based on the data (unknown). These weights are projected into an image acquired simultaneously and used to segment the image into the mobile and static parts of the scene. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
29. HuPBA8k+: Dataset and ECOC-Graph-Cut based segmentation of human limbs.
- Author
-
Sánchez, Daniel, Ángel Bautista, Miguel, and Escalera, Sergio
- Subjects
- *
GRAPH theory , *IMAGE segmentation , *ERROR-correcting codes , *HUMAN-computer interaction , *COMPLETENESS theorem - Abstract
Human multi-limb segmentation in RGB images has attracted a lot of interest in the research community because of the huge amount of possible applications in fields like Human–Computer Interaction, Surveillance, eHealth, or Gaming. Nevertheless, human multi-limb segmentation is a very hard task because of the changes in appearance produced by different points of view, clothing, lighting conditions, occlusions, and number of articulations of the human body. Furthermore, this huge pose variability makes the availability of large annotated datasets difficult. In this paper, we introduce the HuPBA8k+ dataset. The dataset contains more than 8000 labeled frames at pixel precision, including more than 120,000 manually labeled samples of 14 different limbs. For completeness, the dataset is also labeled at frame-level with action annotations drawn from an 11-action dictionary which includes both single-person actions and person–person interactive actions. Furthermore, we also propose a two-stage approach for the segmentation of human limbs. In the first stage, cascades of classifiers are trained to split human limbs in a tree-structured way, which is included in an Error-Correcting Output Codes (ECOC) framework to define a body-like probability map. This map is used to obtain a binary mask of the subject by means of GMM color modelling and Graph-Cuts theory. In the second stage, we embed a similar tree structure in an ECOC framework to build a more accurate set of limb-like probability maps within the segmented user mask, which are fed to a multi-label Graph-Cut procedure to obtain the final multi-limb segmentation. The methodology is tested on the novel HuPBA8k+ dataset, showing performance improvements in comparison to state-of-the-art approaches. In addition, a baseline of standard action recognition methods for the 11 action categories of the novel dataset is also provided. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
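Note: the "GMM colour modelling + Graph-Cuts" step used above to obtain the binary person mask can be sketched as a single GrabCut-style cut. The example below assumes scikit-learn and the PyMaxflow package; the seed masks, component count and smoothness weight are illustrative choices, not the paper's configuration, and the ECOC stages are not reproduced.

    import numpy as np
    import maxflow  # PyMaxflow
    from sklearn.mixture import GaussianMixture

    def person_mask(image, fg_seed_mask, bg_seed_mask, n_components=5, lam=10.0):
        """image: (H, W, 3) float RGB; *_seed_mask: boolean (H, W) seed pixels."""
        h, w, _ = image.shape
        pixels = image.reshape(-1, 3)
        fg_gmm = GaussianMixture(n_components=n_components).fit(pixels[fg_seed_mask.ravel()])
        bg_gmm = GaussianMixture(n_components=n_components).fit(pixels[bg_seed_mask.ravel()])
        # Unary costs: negative log-likelihood under each colour model.
        cost_fg = -fg_gmm.score_samples(pixels).reshape(h, w)
        cost_bg = -bg_gmm.score_samples(pixels).reshape(h, w)
        g = maxflow.Graph[float]()
        nodeids = g.add_grid_nodes((h, w))
        g.add_grid_edges(nodeids, lam)                 # uniform 4-connected smoothness
        # With PyMaxflow's convention a pixel on the sink side pays the source capacity
        # (cost_fg), so foreground-like pixels land on the sink side of the cut.
        g.add_grid_tedges(nodeids, cost_fg, cost_bg)
        g.maxflow()
        return g.get_grid_segments(nodeids)            # True = foreground (person) mask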
30. A scalable approach to T2-MRI colon segmentation
- Author
-
Universitat Politècnica de Catalunya. Doctorat en Computació, Universitat Politècnica de Catalunya. Departament de Ciències de la Computació, Universitat Politècnica de Catalunya. ViRVIG - Grup de Recerca en Visualització, Realitat Virtual i Interacció Gràfica, Orellana Bech, Bernat, Monclús Lahoya, Eva, Navazo Álvaro, Isabel, Brunet Crosa, Pere, Bendezú García, Álvaro, and Azpiroz Vidaur, Fernando
- Abstract
The study of the colonic volume is a procedure with strong relevance to gastroenterologists. Depending on the clinical protocols, the volume analysis has to be performed on MRI of the unprepared colon without contrast administration. In such circumstances, existing measurement procedures are cumbersome and time-consuming for the specialists. The algorithm presented in this paper permits a quasi-automatic segmentation of the unprepared colon on T2-weighted MRI scans. The segmentation algorithm is organized as a three-stage pipeline. In the first stage, a custom tubularity filter is run to detect colon candidate areas. The specialists provide a list of points along the colon trajectory, which are combined with tubularity information to estimate the colon medial path. In the second stage, we delimit the region of interest by applying custom segmentation algorithms to detect colon neighboring regions and the fat capsule containing abdominal organs. Finally, within the reduced search space, segmentation is performed via 3D graph-cuts in a three-stage multigrid approach. Our algorithm was tested on MRI abdominal scans, including different acquisition resolutions, and its results were compared to the colon ground truth segmentations provided by the specialists. The experiments proved the accuracy, efficiency, and usability of the algorithm, while the variability of the scan resolutions helped demonstrate the computational scalability of the multigrid architecture. The system is fully applicable to the colon measurement clinical routine, being a substantial step towards a fully automated segmentation.
- Published
- 2020
31. Object co-segmentation based on directed graph clustering.
- Author
-
Meng, Fanman, Luo, Bing, and Huang, Chao
- Abstract
In this paper, we develop a new algorithm to segment multiple common objects from a group of images. Our method consists of two aspects: directed graph clustering and prior propagation. The clustering step groups the local regions of the original images and generates foreground priors from these clusters. The second step propagates the prior of each class and locates the common objects in the images in terms of a foreground map. Finally, we use the foreground map as the unary term of a Markov random field segmentation and segment the common objects with a graph-cuts algorithm. We test our method on the FlickrMFC and iCoseg datasets. The experimental results show that the proposed method achieves higher accuracy compared with several state-of-the-art co-segmentation methods. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
32. Space-Time Joint Multi-layer Segmentation and Depth Estimation.
- Author
-
Guillemaut, Jean-Yves and Hilton, Adrian
- Abstract
Video-based segmentation and reconstruction techniques are predominantly extensions of techniques developed for the image domain treating each frame independently. These approaches ignore the temporal information contained in input videos which can lead to incoherent results. We propose a framework for joint segmentation and reconstruction which explicitly enforces temporal consistency by formulating the problem as an energy minimisation generalised to groups of frames. The main idea is to use optical flow in combination with a confidence measure to impose robust temporal smoothness constraints. Optimisation is performed using recent advances in the field of graph-cuts combined with practical considerations to reduce run-time and memory consumption. Experimental results with real sequences containing rapid motion demonstrate that the method is able to improve spatio-temporal coherence both in terms of segmentation and reconstruction without introducing any degradation in regions where optical flow fails due to fast motion. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
33. Automatic differential segmentation of the prostate in 3-D MRI using Random Forest classification and graph-cuts optimization.
- Author
-
Moschidis, Emmanouil and Graham, Jim
- Abstract
In this paper we address the problem of automated differential segmentation of the prostate in three dimensional (3-D) magnetic resonance images (MRI) of patients with benign prostatic hyperplasia (BPH). We suggest a framework that consists of two stages: in the first stage, a Random Forest classifier localizes the anatomy of interest. In the second stage, Graph-Cuts (GC) optimization is utilized for obtaining the final delineation. GC optimization regularizes the hypotheses produced by the classification scheme by imposing contextual constraints via a Markov Random Field model. Our method obtains comparable or better results in a fully automated fashion compared with a previous semi-automatic technique [6]. It also performs well, when small training sets are used. This is particularly useful in on-line interactive segmentation systems, where prior knowledge is limited, or in automated approaches that generate ground truth used for model-building. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
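Note: the two-stage pattern above (a Random Forest proposes voxel-wise hypotheses, a graph cut regularizes them with contextual constraints) is sketched below for the binary prostate/background case, assuming scikit-learn and PyMaxflow; the features, forest settings, class ordering and smoothness weight are illustrative assumptions, not those of the paper.

    import numpy as np
    import maxflow  # PyMaxflow
    from sklearn.ensemble import RandomForestClassifier

    def segment_volume(features, volume_shape, rf: RandomForestClassifier, lam=2.0, eps=1e-6):
        """features: (n_voxels, n_features) per-voxel descriptors, n_voxels = prod(volume_shape);
        rf: a classifier already fitted on labeled voxels, with class 1 assumed to be prostate."""
        prob_fg = rf.predict_proba(features)[:, 1].reshape(volume_shape)
        cost_fg = -np.log(prob_fg + eps)               # unary cost of the prostate label
        cost_bg = -np.log(1.0 - prob_fg + eps)         # unary cost of the background label
        g = maxflow.Graph[float]()
        nodeids = g.add_grid_nodes(volume_shape)       # 6-connected 3-D grid
        g.add_grid_edges(nodeids, lam)                 # Potts-style smoothness (contextual constraint)
        g.add_grid_tedges(nodeids, cost_fg, cost_bg)   # voxels on the sink side pay cost_fg
        g.maxflow()
        return g.get_grid_segments(nodeids)            # True = prostate voxels (sink segment)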
34. Hyperthermia critical tissues automatic segmentation of head and neck CT images using atlas registration and graph cuts.
- Author
-
Fortunati, V., Verhaart, R. F., van der Lijn, F., Niessen, W. J., Veenland, J. F., Paulides, M. M., and van Walsum, T.
- Abstract
Outcome optimization of hyperthermia tumor treatment in the head and neck requires accurate hyperthermia treatment planning. Hyperthermia treatment planning is based on tissue segmentation for 3D patient model generation. We present here an automatic atlas-based segmentation algorithm for the organs at risk from CT images of the head and neck. To overcome the large anatomical variability, atlas registration and intensity-based classification were combined. A cost function composed of an intensity energy term, a spatial prior energy term based on the atlas registration, and a regularization term is globally minimized using a graph cut. The method was evaluated by measuring the Dice similarity coefficient and the mean and Hausdorff surface distances with respect to manual delineation. Overall, a high correspondence was found, with a Dice similarity coefficient higher than 0.86 and a mean distance lower than the voxel resolution. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
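Note: the abstract above names the three ingredients of the cost function but not its exact form; a generic rendering of that structure (intensity term, atlas-based spatial prior, pairwise regularization, minimized by a graph cut) would be

    E(\mathbf{x}) = \sum_{p} \Big[ U_{\mathrm{int}}\big(x_p \mid I_p\big) + \alpha\, U_{\mathrm{atlas}}\big(x_p \mid A_p\big) \Big] \;+\; \beta \sum_{(p,q)\in\mathcal{N}} [\,x_p \neq x_q\,]

where I_p is the CT intensity at voxel p, A_p the spatial prior carried over by the atlas registration, and α, β are weights; the specific potentials and weights used in the paper are not stated in the abstract.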
36. Tensor-Cuts: A simultaneous multi-type feature extractor and classifier and its application to road extraction from satellite images.
- Author
-
Poullis, Charalambos
- Subjects
- *
FEATURE extraction , *REMOTE-sensing images , *ALGORITHMS , *GABOR filters , *DATA analysis , *GLOBAL optimization - Abstract
Many different algorithms have been proposed for the extraction of features, with a range of applications. In this work, we present Tensor-Cuts: a novel framework for feature extraction and classification from images which results in the simultaneous extraction and classification of multiple feature types (surfaces, curves and joints). The proposed framework combines the strengths of tensor encoding, feature extraction using Gabor jets, and global optimization using Graph-Cuts; it is unsupervised and requires no thresholds. We present the application of the proposed framework in the context of road extraction from satellite images, since its characteristics make it an ideal candidate for use in remote sensing applications where the input data varies widely. We have extensively tested the proposed framework and present the results of its application to road extraction from satellite images. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
37. EXTRACTING MOBILE OBJECTS IN IMAGES USING A VELODYNE LIDAR POINT CLOUD.
- Author
-
Vallet, Bruno, Wen Xiao, and Brédif, Mathieu
- Subjects
DEMPSTER-Shafer theory ,OPTICAL scanners ,PROBABILITY theory - Abstract
This paper presents a full pipeline to extract mobile objects in images based on a simultaneous laser acquisition with a Velodyne scanner. The point cloud is first analysed to extract mobile objects in 3D. This is done using Dempster-Shafer theory and it results in weights indicating, for each point, whether it corresponds to a mobile object, a fixed object, or whether no decision can be made based on the data (unknown). These weights are projected into an image acquired simultaneously and used to segment the image into the mobile and static parts of the scene. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
38. PHOTOGRAMMETRIC DSM DENOISING.
- Author
-
Nex, F. and Gerke, M.
- Subjects
PHOTOGRAMMETRY ,IMAGE registration ,LIDAR ,DIGITAL elevation models ,SIGNAL denoising ,RANDOM noise theory - Abstract
Image matching techniques can nowadays provide very dense point clouds and they are often considered a valid alternative to LiDAR point clouds. However, photogrammetric point clouds are often characterized by a higher level of random noise compared to LiDAR data and by the presence of large outliers. These problems constitute a limitation in the practical use of photogrammetric data for many applications, but an effective way to enhance the generated point cloud has yet to be found. In this paper we concentrate on the restoration of Digital Surface Models (DSM) computed from dense image matching point clouds. A photogrammetric DSM, i.e. a 2.5D representation of the surface, is still one of the major products derived from point clouds. Four different algorithms devoted to DSM denoising are presented: a standard median filter approach, a bilateral filter, a variational approach (TGV: Total Generalized Variation), as well as a newly developed algorithm, which is embedded into a Markov Random Field (MRF) framework and optimized through graph-cuts. The ability of each algorithm to recover the original DSM has been quantitatively evaluated. To do that, a synthetic DSM has been generated and different types of noise have been added to mimic the typical errors of photogrammetric DSMs. The evaluation reveals that standard filters like the median filter and edge-preserving smoothing through a bilateral filter cannot sufficiently remove the typical errors occurring in a photogrammetric DSM. The TGV-based approach removes random noise much better, but large areas with outliers still remain. Our own method, which explicitly models the degradation properties of those DSMs, outperforms the others in all aspects. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
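For reference, the sketch below runs two of the baseline filters compared above, a median filter and an edge-preserving bilateral filter, on a 2.5D height grid using SciPy and OpenCV. The MRF/graph-cuts method proposed in the paper is not reproduced, and the filter parameters are illustrative assumptions.

```python
# Two of the baseline DSM filters discussed above (median and bilateral);
# the paper's MRF/graph-cuts method is not reproduced here.
import numpy as np
from scipy.ndimage import median_filter
import cv2

def denoise_dsm(dsm, median_size=5, d=9, sigma_color=2.0, sigma_space=5.0):
    """Return (median-filtered, bilateral-filtered) versions of a 2.5D height grid."""
    dsm32 = dsm.astype(np.float32)
    med = median_filter(dsm32, size=median_size)                     # removes spikes/outliers
    bil = cv2.bilateralFilter(dsm32, d, sigma_color, sigma_space)    # edge-preserving smoothing
    return med, bil

# Usage on a synthetic noisy DSM (height values are illustrative):
# dsm = 10.0 + np.random.normal(0, 0.3, (512, 512)).astype(np.float32)
# med, bil = denoise_dsm(dsm)
```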
39. MBIS: Multivariate Bayesian Image Segmentation tool.
- Author
-
Esteban, Oscar, Wollny, Gert, Gorthi, Subrahmanyam, Ledesma-Carbayo, María-J., Thiran, Jean-Philippe, Santos, Andrés, and Bach-Cuadra, Meritxell
- Subjects
- *
MULTIVARIATE analysis , *BAYESIAN analysis , *IMAGE segmentation , *GAUSSIAN distribution , *MARKOV random fields , *SPLINES - Abstract
We present MBIS (Multivariate Bayesian Image Segmentation tool), a clustering tool based on the mixture of multivariate normal distributions model. MBIS supports multichannel bias field correction based on a B-spline model. A second methodological novelty is the inclusion of graph-cuts optimization for the stationary anisotropic hidden Markov random field model. Along with MBIS, we release an evaluation framework that contains three different experiments on multi-site data. We first validate the accuracy of segmentation and the estimated bias field for each channel. MBIS outperforms a widely used segmentation tool in a cross-comparison evaluation. The second experiment demonstrates the robustness of results on atlas-free segmentation of two image sets from scan–rescan protocols on 21 healthy subjects. Multivariate segmentation is more replicable than the monospectral counterpart on T1-weighted images. Finally, we provide a third experiment to illustrate how MBIS can be used in a large-scale study of tissue volume change with increasing age in 584 healthy subjects. This last result is meaningful as multivariate segmentation performs robustly without the need for prior knowledge. [Copyright Elsevier] (A minimal multivariate mixture-model sketch follows this entry.)
- Published
- 2014
- Full Text
- View/download PDF
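As a minimal stand-in for the mixture-of-multivariate-normals model at the core of the tool described above, the sketch below clusters multichannel voxel intensities with scikit-learn's GaussianMixture. Bias-field correction and the hidden-MRF/graph-cuts regularization are deliberately omitted; the function name and channel layout are assumptions.

```python
# Minimal multivariate Gaussian-mixture tissue clustering in the spirit of the
# model described above (no bias-field correction, no MRF/graph-cuts step).
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_segment(channels, n_classes=3, mask=None):
    """channels: list of co-registered 3D arrays (e.g. T1, T2); returns a label volume."""
    vol_shape = channels[0].shape
    X = np.stack([c.ravel() for c in channels], axis=1).astype(np.float64)
    if mask is None:
        mask = np.ones(vol_shape, dtype=bool)
    m = mask.ravel()
    gmm = GaussianMixture(n_components=n_classes, covariance_type="full",
                          random_state=0).fit(X[m])
    labels = np.zeros(X.shape[0], dtype=np.int32)
    labels[m] = gmm.predict(X[m]) + 1        # label 0 stays background
    return labels.reshape(vol_shape)
```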
40. Fully automatic lesion segmentation in breast MRI using mean-shift and graph-cuts on a region adjacency graph.
- Author
-
McClymont, Darryl, Mehnert, Andrew, Trakic, Adnan, Kennedy, Dominic, and Crozier, Stuart
- Abstract
Purpose: To present and evaluate a fully automatic method for segmentation (i.e., detection and delineation) of suspicious tissue in breast MRI. Materials and Methods: The method, based on mean-shift clustering and graph-cuts on a region adjacency graph, was developed and its parameters tuned using multimodal (T1, T2, DCE-MRI) clinical breast MRI data from 35 subjects (training data). It was then tested using two data sets. Test set 1 comprises data for 85 subjects (93 lesions) acquired using the same protocol and scanner system used to acquire the training data. Test set 2 comprises data for eight subjects (nine lesions) acquired using a similar protocol but a different vendor's scanner system. Each lesion was manually delineated in three dimensions by an experienced breast radiographer to establish segmentation ground truth. The regions of interest identified by the method were compared with the ground truth, and the detection and delineation accuracies were quantitatively evaluated. Results: One hundred percent of the lesions were detected, with a mean of 4.5 ± 1.2 false positives per subject. This false-positive rate is nearly 50% better than previously reported for a fully automatic breast lesion detection system. The median Dice coefficient for Test set 1 was 0.76 (interquartile range, 0.17), and 0.75 (interquartile range, 0.16) for Test set 2. Conclusion: The results demonstrate the efficacy and accuracy of the proposed method as well as its potential for direct application across different MRI systems. It is (to the authors' knowledge) the first fully automatic method for breast lesion detection and delineation in breast MRI. J. Magn. Reson. Imaging 2014;39:795-804. © 2013 Wiley Periodicals, Inc. [ABSTRACT FROM AUTHOR] (A sketch of the mean-shift clustering stage follows this entry.)
- Published
- 2014
- Full Text
- View/download PDF
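The sketch below illustrates a mean-shift clustering stage of the kind described above, applied to per-voxel feature vectors with scikit-learn; the region adjacency graph and graph-cuts stage are omitted. The channel names in the usage comment are hypothetical, and for realistic volumes the feature matrix would typically need subsampling for speed.

```python
# Sketch of mean-shift clustering over per-voxel feature vectors (e.g.
# multichannel intensities); the region-adjacency-graph cut is omitted.
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def meanshift_regions(features, shape, quantile=0.2, max_samples=2000):
    """features: (n_voxels, n_channels) array; returns an integer label volume."""
    bw = estimate_bandwidth(features, quantile=quantile, n_samples=max_samples)
    ms = MeanShift(bandwidth=bw, bin_seeding=True).fit(features)
    return ms.labels_.reshape(shape)

# Usage with hypothetical co-registered T1/T2/DCE channels:
# feats = np.stack([t1.ravel(), t2.ravel(), dce.ravel()], axis=1)
# regions = meanshift_regions(feats, t1.shape)
```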
41. 3D segmentation of abdominal CT imagery with graphical models, conditional random fields and learning.
- Author
-
Bhole, Chetan, Pal, Christopher, Rim, David, and Wismüller, Axel
- Subjects
- *
IMAGE segmentation , *THREE-dimensional imaging , *COMPUTED tomography , *MACHINE learning , *PROBABILITY theory , *CONDITIONAL random fields - Abstract
Probabilistic graphical models have had a tremendous impact in machine learning, and approaches based on energy function minimization via techniques such as graph cuts are now widely used in image segmentation. However, the free parameters in energy function-based segmentation techniques are often set by hand or using heuristic techniques. In this paper, we explore parameter learning in detail. We show how probabilistic graphical models can be used for segmentation problems to illustrate Markov random fields (MRFs), their discriminative counterparts conditional random fields (CRFs), as well as kernel CRFs. We discuss the relationships between energy function formulations, MRFs, CRFs, hybrids based on graphical models, and their relationships to key techniques for inference and learning. We then explore a series of novel 3D graphical models and present a series of detailed experiments comparing and contrasting different approaches for the complete volumetric segmentation of multiple organs within computed tomography imagery of the abdominal region. Further, we show how these modeling techniques can be combined with state-of-the-art image features based on histograms of oriented gradients to increase segmentation performance. We explore a wide variety of modeling choices, discuss the importance of and relationships between inference and learning techniques, and present experiments using different levels of user interaction. We go on to explore a novel approach to the challenging and important problem of adrenal gland segmentation. We present a 3D CRF formulation and compare it with a novel 3D sparse kernel CRF approach we call a relevance vector random field. The method yields state-of-the-art performance and avoids the need to discretize or cluster input features. We believe our work is the first to provide quantitative comparisons between traditional MRFs with edge-modulated interaction potentials and CRFs for multi-organ abdominal segmentation, and the first to explore the 3D adrenal gland segmentation problem. Finally, along with this paper we provide the labeled data used for our experiments to the community. [ABSTRACT FROM AUTHOR] (A minimal graph-cut energy-minimization sketch follows this entry.)
- Published
- 2014
- Full Text
- View/download PDF
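The entry above is centred on energy-function minimization via graph cuts. The sketch below shows a minimal binary MRF segmentation of that kind, assuming the third-party PyMaxflow package is available; the intensity-based unary terms and Potts smoothness weight are illustrative choices, not the learned CRF potentials studied in the paper.

```python
# Minimal binary MRF segmentation by graph cuts, assuming the third-party
# PyMaxflow package (pip install PyMaxflow). Unary terms here are simple
# intensity likelihoods; the paper's learned potentials are not reproduced.
import numpy as np
import maxflow

def binary_graphcut(image, fg_mean, bg_mean, sigma=10.0, smoothness=2.0):
    """Label each pixel by minimizing a unary + Potts pairwise energy."""
    unary_fg = (image - fg_mean) ** 2 / (2 * sigma ** 2)   # cost of the foreground label
    unary_bg = (image - bg_mean) ** 2 / (2 * sigma ** 2)   # cost of the background label
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(image.shape)
    g.add_grid_edges(nodes, weights=smoothness)            # grid-neighbour Potts prior
    g.add_grid_tedges(nodes, unary_fg, unary_bg)
    g.maxflow()
    # Boolean label map; polarity depends on the source/sink convention and
    # may need flipping for a given application.
    return g.get_grid_segments(nodes)

# Usage sketch (illustrative intensity means):
# seg = binary_graphcut(ct_slice.astype(float), fg_mean=120.0, bg_mean=40.0)
```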
42. A scalable approach to T2-MRI colon segmentation
- Author
-
Pere Brunet, Álvaro Bendezú, Isabel Navazo, Bernat Orellana, Fernando Azpiroz, Eva Monclús, Universitat Politècnica de Catalunya. Doctorat en Computació, Universitat Politècnica de Catalunya. Departament de Ciències de la Computació, and Universitat Politècnica de Catalunya. ViRVIG - Grup de Recerca en Visualització, Realitat Virtual i Interacció Gràfica
- Subjects
Colon ,Computer science ,Medicine ,Pipeline (computing) ,Health Informatics ,Colon segmentation ,Algorithms ,Image processing ,Multigrid method ,Computer applications ,Region of interest ,Cut ,Radiology, Nuclear Medicine and Imaging ,Segmentation ,Ground truth ,Radiological and Ultrasound Technology ,Graph theory ,Pattern recognition ,Filter (signal processing) ,Magnetic Resonance Imaging ,Computer Graphics and Computer-Aided Design ,Tubularity ,Medical images ,Scalability ,Graph-cuts ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Imaging systems in medicine ,MRI ,Health sciences
The study of the colonic volume is a procedure with strong relevance to gastroenterologists. Depending on the clinical protocol, the volume analysis has to be performed on MRI of the unprepared colon without contrast administration. In such circumstances, existing measurement procedures are cumbersome and time-consuming for the specialists. The algorithm presented in this paper permits a quasi-automatic segmentation of the unprepared colon on T2-weighted MRI scans. The segmentation algorithm is organized as a three-stage pipeline. In the first stage, a custom tubularity filter is run to detect colon candidate areas. The specialists provide a list of points along the colon trajectory, which are combined with the tubularity information to estimate the colon medial path. In the second stage, we delimit the region of interest by applying custom segmentation algorithms to detect colon-neighbouring regions and the fat capsule containing the abdominal organs. Finally, within the reduced search space, segmentation is performed via 3D graph-cuts in a three-stage multigrid approach. Our algorithm was tested on abdominal MRI scans with different acquisition resolutions, and its results were compared to the colon ground-truth segmentations provided by the specialists. The experiments proved the accuracy, efficiency, and usability of the algorithm, while the variability of the scan resolutions helped demonstrate the computational scalability of the multigrid architecture. The system is fully applicable to the clinical routine of colon measurement and is a substantial step towards a fully automated segmentation. (A generic tubularity-filter sketch follows this entry.)
- Published
- 2020
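As a generic stand-in for the custom tubularity filter in the first pipeline stage above, the sketch below computes a multiscale tubularity (vesselness) response with scikit-image's Sato filter. The scale range, normalization, and threshold in the usage comment are assumptions, not the authors' settings.

```python
# Generic multiscale tubularity response as a stand-in for the custom
# tubularity filter described above (scikit-image's Sato vesselness filter).
import numpy as np
from skimage.filters import sato

def colon_tubularity(volume, sigmas=(2, 4, 6, 8)):
    """Return a per-voxel tubularity score for a 3D MRI volume (bright tubes)."""
    v = volume.astype(np.float32)
    v = (v - v.min()) / (v.max() - v.min() + 1e-8)   # normalize intensities to [0, 1]
    return sato(v, sigmas=sigmas, black_ridges=False)

# Candidate colon areas could then be thresholded from the response, e.g.:
# candidates = colon_tubularity(mri_volume) > 0.05
```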
43. Rapid Reconstruction of a Three-Dimensional Mesh Model Based on Oblique Images in the Internet of Things
- Author
-
Li Wang, Guangyun Li, and Dongling Ma
- Subjects
mesh model ,General Computer Science ,Computer science ,Internet of Things ,Point cloud ,Solid modeling ,Iterative reconstruction ,Computational science ,Octree ,Robustness (computer science) ,Triangle mesh ,General Materials Science ,3D reconstruction ,oblique images ,General Engineering ,Triangulation ,Directed graph ,Vertex (geometry) ,Photogrammetry ,smart city ,graph-cuts ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 ,Surface reconstruction
One of the main targets of the Internet of Things (IoT) is the construction of smart cities, and many industries based on the IoT serve popular applications in a smart city. However, 3-D reconstruction constitutes a major difficulty in the construction of a smart city. In recent years, oblique photography technology has been widely applied to rapid 3-D modeling and other aspects of smart cities. However, in the automatic construction of a 3-D mesh model for oblique photogrammetry, complex building geometries make it very difficult to construct a triangular mesh model. Therefore, a network construction method is needed that can not only effectively construct a 3-D mesh model but also handle the results of auto-modeling for an oblique image. The representative network construction method produces a huge triangulation network in which the constructed object surface does not satisfy the manifold property, making the model inconvenient to optimize and edit and yielding a low network construction efficiency. To solve these problems, a new method for constructing a high-quality manifold mesh model is proposed in this paper. First, an adaptive octree division algorithm is used to divide the point cloud data into mutually overlapping subdomains. Then, a mesh reconstruction is performed in each subdomain, and an efficient mesh construction algorithm based on relabeling the vertices of a directed graph is proposed to construct the manifold mesh. Finally, a triangular facet orientation method is used to homogenize the normal vectors of the mesh. The experimental results prove that the proposed method greatly improves the mesh reconstruction, effectively reflects the model details, and possesses a strong anti-noise ability. It is also robust and particularly suitable for the 3-D reconstruction of large scenes and complex surfaces. (A minimal octree-subdivision sketch follows this entry.)
- Published
- 2018
- Full Text
- View/download PDF
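The first step above is an adaptive octree division of the point cloud. The sketch below is a minimal, non-overlapping recursive octree subdivision in NumPy; the paper's overlapping subdomains and the directed-graph relabeling used for meshing are not reproduced, and the leaf-size limits are illustrative.

```python
# Minimal recursive octree subdivision of a point cloud (pure NumPy).
# The paper's overlapping subdomains and graph-based meshing are not reproduced.
import numpy as np

def octree_leaves(points, bbox_min, bbox_max, max_points=5000, depth=0, max_depth=8):
    """Yield (bbox_min, bbox_max, global point indices) for adaptive octree leaves."""
    inside = np.all((points >= bbox_min) & (points < bbox_max), axis=1)
    idx = np.nonzero(inside)[0]
    if len(idx) <= max_points or depth >= max_depth:
        yield bbox_min, bbox_max, idx
        return
    center = (bbox_min + bbox_max) / 2.0
    for octant in range(8):                                   # split into 8 children
        sel = np.array([(octant >> k) & 1 for k in range(3)], dtype=bool)
        lo = np.where(sel, center, bbox_min)
        hi = np.where(sel, bbox_max, center)
        yield from octree_leaves(points, lo, hi, max_points, depth + 1, max_depth)

# Usage sketch:
# pts = np.random.rand(100000, 3)
# leaves = list(octree_leaves(pts, pts.min(axis=0), pts.max(axis=0) + 1e-9))
```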
44. Human limb segmentation in depth maps based on spatio-temporal Graph-cuts optimization.
- Author
-
Hernández-Vela, Antonio, Zlateva, Nadezhda, Marinov, Alexander, Reyes, Miguel, Radeva, Petia, Dimov, Dimo, and Escalera, Sergio
- Subjects
IMAGE segmentation ,GRAPH theory ,ALGORITHMS ,EXTREMITIES (Anatomy) ,DEPTH maps (Digital image processing) ,PROBABILITY theory ,PERFORMANCE evaluation ,DATA analysis ,MATHEMATICAL optimization - Abstract
We present a framework for object segmentation using depth maps, based on Random Forest and Graph-cuts theory, and apply it to the segmentation of human limbs. First, from a set of random depth features, Random Forest is used to infer a set of label probabilities for each data sample. This vector of probabilities is used as the unary term in the α-β swap Graph-cuts algorithm. Moreover, depth values of spatio-temporal neighboring data points are used as boundary potentials. Results on a new multi-label human depth data set show that the novel methodology achieves high segmentation overlap compared to classical approaches. [ABSTRACT FROM AUTHOR] (A sketch of the Random Forest unary-term stage follows this entry.)
- Published
- 2012
- Full Text
- View/download PDF
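The sketch below illustrates the Random Forest stage described above: per-sample label probabilities turned into unary costs of the kind that would feed the α-β swap graph-cuts step (which is omitted here). The feature matrices in the usage comment are hypothetical placeholders.

```python
# Sketch of the Random Forest stage: per-sample label probabilities that would
# serve as unary potentials in the graph-cuts step (the cut itself is omitted).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def limb_unaries(train_feats, train_labels, test_feats, eps=1e-6):
    """Return -log class probabilities (unary costs) for each test sample."""
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(train_feats, train_labels)
    proba = rf.predict_proba(test_feats)      # shape: (n_samples, n_classes)
    return -np.log(proba + eps)               # low cost where the forest is confident

# Usage with hypothetical depth-difference features (one row per pixel):
# unary = limb_unaries(F_train, y_train, F_test)
```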
45. Fully Automatic Segmentations of Liver and Hepatic Tumors From 3-D Computed Tomography Abdominal Images: Comparative Evaluation of Two Automatic Methods.
- Author
-
Casciaro, S., Franchini, R., Massoptier, L., Casciaro, E., Conversano, F., Malvasi, A., and Lay-Ekuakille, A.
- Abstract
An adaptive initialization method was developed to produce fully automatic processing frameworks based on graph-cut and gradient flow active contour algorithms. This method was applied to abdominal Computed Tomography (CT) images for segmentation of liver tissue and hepatic tumors. Twenty-five anonymized datasets were randomly collected from several radiology centres, with no specific requirements on acquisition parameter settings or patient clinical situation as inclusion criteria. The resulting automatic segmentations of liver tissue and tumors were compared to reference standard delineations manually performed by a specialist. Segmentation accuracy was assessed through the following evaluation framework: Dice similarity coefficient (DSC), false negative ratio (FNR), false positive ratio (FPR), and processing time. Regarding liver surfaces, graph-cuts achieved a DSC of 95.49% (FPR=2.35% and FNR=5.10%), while active contours reached a DSC of 96.17% (FPR=3.35% and FNR=3.87%). The analyzed datasets presented 52 tumors: the graph-cut algorithm detected 48 tumors with a DSC of 88.65%, while the active contour algorithm detected only 44 tumors with a DSC of 87.10%. In addition, the graph-cut algorithm required less processing time than the active contour one. The implemented initialization method allows fully automatic segmentation, leading to superior overall performance of the graph-cut algorithm in terms of accuracy and processing time. The initialization method presented here proved suitable and reliable for two different segmentation techniques and could be further extended. [ABSTRACT FROM PUBLISHER] (An illustrative metric-computation sketch follows this entry.)
- Published
- 2012
- Full Text
- View/download PDF
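For reference, the sketch below computes the three evaluation metrics reported above (DSC, FPR, FNR) from binary masks. The exact FPR/FNR definitions used in the paper may differ slightly; here both are expressed relative to the reference volume as a plausible reading.

```python
# Illustrative computation of the evaluation metrics reported above
# (Dice similarity coefficient, false positive ratio, false negative ratio).
import numpy as np

def segmentation_metrics(seg, ref):
    """seg, ref: boolean masks (automatic result vs. manual reference)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    tp = np.logical_and(seg, ref).sum()
    fp = np.logical_and(seg, ~ref).sum()
    fn = np.logical_and(~seg, ref).sum()
    dsc = 2.0 * tp / (seg.sum() + ref.sum())
    fpr = fp / ref.sum()   # false positives relative to the reference volume (assumed definition)
    fnr = fn / ref.sum()   # missed reference voxels relative to the reference volume
    return dsc, fpr, fnr

# Example: two overlapping boxes give a Dice of about 0.69.
a = np.zeros((64, 64), bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), bool); b[15:45, 15:45] = True
print(segmentation_metrics(a, b))
```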
46. Non-rigid image registration of brain magnetic resonance images using graph-cuts
- Author
-
So, Ronald W.K., Tang, Tommy W.H., and Chung, Albert C.S.
- Subjects
- *
IMAGE registration , *MAGNETIC resonance imaging of the brain , *DIGITAL image processing , *COMBINATORIAL optimization , *SMOOTHNESS of functions , *VOXEL-based morphometry - Abstract
We present a graph-cuts based method for non-rigid medical image registration on brain magnetic resonance images. In this paper, the non-rigid medical image registration problem is reformulated as a discrete labeling problem. Based on a voxel-to-voxel intensity similarity measure, each voxel in the source image is assigned a displacement label, which represents a displacement vector indicating the position in the floating image to which it spatially corresponds. In the proposed method, a smoothness constraint based on the first derivative is used to penalize sharp changes in the displacement labels across adjacent voxels. The image registration problem is therefore modeled by two energy terms based on intensity similarity and smoothness of the displacement field. These energy terms are submodular and can be optimized using the graph-cuts method, a powerful combinatorial optimization tool capable of yielding either a global minimum or a local minimum in a strong sense. Using the realistic brain phantoms obtained from the Simulated Brain Database, we compare the registration results of the proposed method with two state-of-the-art medical image registration approaches: the free-form deformation based method and the demons method. In addition, the registration results are also compared with those of the linear programming based image registration method. It is found that the proposed method is more robust across different challenging non-rigid registration cases, with consistently higher registration accuracy than those three methods, and gives realistic recovered deformation fields. [Copyright Elsevier] (The generic form of the two-term energy is written out after this entry.)
- Published
- 2011
- Full Text
- View/download PDF
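The "two energy terms" described above follow the standard discrete-labeling form shown below; the notation is generic and not taken from the paper: a voxel-wise dissimilarity ρ between the source image I_s and the displaced floating image I_f, plus a first-derivative smoothness penalty over neighboring displacement labels weighted by λ.

```latex
% Generic discrete-labeling registration energy of the kind described above
% (data term + first-derivative smoothness term); notation is assumed, not the paper's.
E(\mathbf{d}) = \sum_{p} \rho\big(I_s(p),\, I_f(p + \mathbf{d}_p)\big)
              + \lambda \sum_{(p,q)\in\mathcal{N}} \lVert \mathbf{d}_p - \mathbf{d}_q \rVert_1
```

Here d_p is the displacement label of voxel p and N is the neighborhood system; with submodular pairwise terms, this energy can be minimized by graph-cut move-making algorithms.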
47. 3D Archive System for Traditional Performing Arts.
- Author
-
Hisatomi, Kensuke, Katayama, Miwa, Tomiyama, Kimihiro, and Iwadate, Yuichi
- Subjects
- *
JAPANESE people , *PERFORMING arts , *ALGORITHMS , *PERFORMANCE art , *WEB archives - Abstract
We developed a 3D archive system for Japanese traditional performing arts. The system generates sequences of 3D actor models of the performances from multi-view video by using a graph-cuts algorithm and stores them with CG background models and related information. The system can show a scene from any viewpoint as follows: the 3D actor model is integrated with the background model, and the integrated model is projected to a viewpoint that the user indicates with a viewpoint controller. A challenge in generating the actor models is how to reconstruct thin or slender parts. Japanese traditional costumes for performances include slender parts such as long sleeves, fans and strings that may be manipulated during the performance. The graph-cuts algorithm is a powerful 3D reconstruction tool, but it tends to cut off those parts because it uses an energy-minimization process. Hence, finding a way to reconstruct such parts is important for preserving these arts for future generations. We therefore devised an adaptive erosion method that works on the visual hull and applied it to the graph-cuts algorithm to extract interior nodes in the thin parts and to prevent the thin parts from being cut off. Another tendency of the reconstruction method using the graph-cuts algorithm is over-shrinkage of the reconstructed models. This arises because the energy can also be reduced by cutting inside the true surface. To avoid this tendency, we applied a silhouette-rim constraint defined by the number of silhouette rims passing through each node. By applying the adaptive erosion process and the silhouette-rim constraint, we succeeded in reconstructing a virtual performance with costumes including thin parts. This paper presents the results of the 3D reconstruction using the proposed method and some outputs of the 3D archive system. [ABSTRACT FROM AUTHOR] (A simplified adaptive-erosion sketch follows this entry.)
- Published
- 2011
- Full Text
- View/download PDF
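The sketch below approximates the adaptive erosion idea described above on a voxelized visual hull: thick regions are eroded deeply, while the erosion depth in slender parts is capped by the local thickness so interior seed voxels survive there. It is a simplified reading of the method, not the authors' algorithm, and the parameter values are illustrative.

```python
# Simplified "adaptive erosion" of a voxelized visual hull: erode up to a
# fixed depth in thick regions, but cap the erosion by the local thickness in
# slender parts so that interior seed voxels survive (an approximation only).
import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter

def adaptive_interior(hull, depth=4.0, alpha=0.5, window=9):
    """hull: boolean 3D occupancy grid (visual hull). Returns interior-seed mask."""
    dist = distance_transform_edt(hull)                    # distance to the hull surface (voxels)
    local_thickness = maximum_filter(dist, size=window)    # crude local thickness estimate
    threshold = np.minimum(depth, alpha * local_thickness) # shallower erosion where the part is thin
    return hull & (dist >= threshold)
```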
48. Joint Multi-Layer Segmentation and Reconstruction for Free-Viewpoint Video Applications.
- Author
-
Guillemaut, Jean-Yves and Hilton, Adrian
- Subjects
- *
IMAGE quality in imaging systems , *DIGITAL image processing , *IMAGE reconstruction , *DIGITAL technology , *HIGH resolution imaging , *IMAGE quality analysis - Abstract
Current state-of-the-art image-based scene reconstruction techniques are capable of generating high-fidelity 3D models when used under controlled capture conditions. However, they are often inadequate when used in more challenging environments such as sports scenes with moving cameras. Algorithms must be able to cope with relatively large calibration and segmentation errors as well as input images separated by a wide-baseline and possibly captured at different resolutions. In this paper, we propose a technique which, under these challenging conditions, is able to efficiently compute a high-quality scene representation via graph-cut optimisation of an energy function combining multiple image cues with strong priors. Robustness is achieved by jointly optimising scene segmentation and multiple view reconstruction in a view-dependent manner with respect to each input camera. Joint optimisation prevents propagation of errors from segmentation to reconstruction as is often the case with sequential approaches. View-dependent processing increases tolerance to errors in through-the-lens calibration compared to global approaches. We evaluate our technique in the case of challenging outdoor sports scenes captured with manually operated broadcast cameras as well as several indoor scenes with natural background. A comprehensive experimental evaluation including qualitative and quantitative results demonstrates the accuracy of the technique for high quality segmentation and reconstruction and its suitability for free-viewpoint video under these difficult conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
49. On Variational Curve Smoothing and Reconstruction.
- Author
-
Yu Wang, Desheng Wang, and Bruckstein, A. M.
- Abstract
In this paper we discuss and experimentally compare variational methods for curve denoising, curve smoothing and curve reconstruction problems. The methods are based on defining suitable cost functionals to be minimized, the cost being the combination of a fidelity term measuring the “distance” of a curve from the data and a smoothness term measuring the curve's $L_1$-norm or length. [ABSTRACT FROM AUTHOR] (A minimal quadratic variant of this trade-off is sketched after this entry.)
- Published
- 2010
- Full Text
- View/download PDF
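As a concrete, minimal instance of the fidelity-plus-smoothness trade-off compared above, the sketch below smooths a noisy 2D polyline by minimizing a quadratic cost with a second-difference penalty (a Tikhonov-style stand-in for the L1 and length terms studied in the paper), solved in closed form.

```python
# Minimal quadratic instance of the fidelity + smoothness trade-off discussed
# above: minimize ||x - y||^2 + lam * ||D x||^2 for a noisy 2D polyline y,
# where D is the discrete second-difference operator.
import numpy as np

def smooth_curve(y, lam=10.0):
    """y: (n, 2) array of noisy curve samples; returns the smoothed (n, 2) curve."""
    n = y.shape[0]
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]      # discrete second derivative
    A = np.eye(n) + lam * D.T @ D             # normal equations of the quadratic cost
    return np.linalg.solve(A, y)              # solved independently per coordinate

# Example: noisy samples of a sine arc.
t = np.linspace(0, np.pi, 100)
noisy = np.stack([t, np.sin(t) + 0.05 * np.random.randn(100)], axis=1)
smoothed = smooth_curve(noisy, lam=50.0)
```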
50. Field Map Reconstruction in Magnetic Resonance Imaging Using Bayesian Estimation.
- Author
-
Baselice, Fabio, Ferraioli, Giampaolo, and Shabou, Aymen
- Subjects
- *
MAGNETIC resonance imaging , *ESTIMATION theory , *MAPS , *METHODOLOGY , *MARKOV random fields , *MAGNETIC resonance , *BAYESIAN field theory , *HOMOGENEITY , *GRAPHIC methods - Abstract
Field inhomogeneities in Magnetic Resonance Imaging (MRI) can cause blur or image distortion, as they produce an off-resonance frequency at each voxel. These effects can be corrected if an accurate field map is available. Field maps can be estimated from the phase of multiple complex MRI data sets. In this paper we present a technique based on statistical estimation to reconstruct a field map exploiting two or more scans. The proposed approach implements a Bayesian estimator in conjunction with the Graph Cuts optimization method. The effectiveness of the method has been proven on simulated and real data. [ABSTRACT FROM AUTHOR] (A simple two-echo baseline field-map computation follows this entry.)
- Published
- 2010
- Full Text
- View/download PDF
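For comparison with the regularized estimator described above, the sketch below computes a common unregularized baseline: a two-echo phase-difference field map. No Bayesian prior or graph-cuts step is involved; the echo spacing in the usage comment is an illustrative value.

```python
# Common unregularized baseline for comparison with the estimator described
# above: a two-echo phase-difference field map (no Bayesian prior, no graph cuts).
import numpy as np

def two_echo_fieldmap(echo1, echo2, delta_te):
    """echo1, echo2: complex images at echo times TE and TE + delta_te (seconds).
    Returns the off-resonance field map in Hz (wrapped to +/- 1 / (2 * delta_te))."""
    phase_diff = np.angle(echo2 * np.conj(echo1))    # wrapped phase evolution between echoes
    return phase_diff / (2.0 * np.pi * delta_te)

# Usage sketch with a 2.46 ms echo spacing (illustrative value):
# b0_hz = two_echo_fieldmap(img_te1, img_te2, delta_te=2.46e-3)
```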