427 results on "Linte, Cristian A."
Search Results
2. M-VAAL: Multimodal Variational Adversarial Active Learning for Downstream Medical Image Analysis Tasks
- Author
-
Khanal, Bidur, Bhattarai, Binod, Khanal, Bishesh, Stoyanov, Danail, and Linte, Cristian A.
- Subjects
FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), FOS: Electrical engineering, electronic engineering, information engineering, Computer Science - Computer Vision and Pattern Recognition, Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Acquiring properly annotated data is expensive in the medical field, as it requires experts, time-consuming protocols, and rigorous validation. Active learning attempts to minimize the need for large annotated sample sets by actively selecting the most informative examples for annotation. These examples contribute significantly to improving the performance of supervised machine learning models, and thus active learning can play an essential role in selecting the most appropriate information for deep learning-based diagnosis, clinical assessments, and treatment planning. Although some existing works have proposed methods for sampling the best examples for annotation in medical image analysis, they are not task-agnostic and do not use multimodal auxiliary information in the sampler, which has the potential to increase robustness. Therefore, in this work, we propose a Multimodal Variational Adversarial Active Learning (M-VAAL) method that uses auxiliary information from additional modalities to enhance active sampling. We applied our method to two datasets: i) brain tumor segmentation and multi-label classification using the BraTS2018 dataset, and ii) chest X-ray image classification using the COVID-QU-Ex dataset. Our results show a promising direction toward data-efficient learning under limited annotations.
- Published
- 2023
3. A Disparity Refinement Framework for Learning-based Stereo Matching Methods in Cross-domain Setting for Laparoscopic Images
- Author
-
Yang, Zixin, Simon, Richard, and Linte, Cristian
- Subjects
FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Article
- Abstract
Purpose: Stereo matching methods that enable depth estimation are crucial for visualization enhancement applications in computer-assisted surgery (CAS). Learning-based stereo matching methods are promising to predict accurate results on laparoscopic images. However, they require a large amount of training data, and their performance may be degraded due to domain shifts. Methods: Maintaining robustness and improving the accuracy of learning-based methods are still open problems. To overcome the limitations of learning-based methods, we propose a disparity refinement framework consisting of a local disparity refinement method and a global disparity refinement method to improve the results of learning-based stereo matching methods in a cross-domain setting. These learning-based stereo matching methods are pre-trained on a large public dataset of natural images and are tested on two datasets of laparoscopic images. Results: Qualitative and quantitative results suggest that our proposed disparity refinement framework can effectively refine disparity maps when they are noise-corrupted on an unseen dataset, without compromising prediction accuracy when the network can generalize well on an unseen dataset. Conclusion: Our proposed disparity refinement framework can work with learning-based methods to achieve robust and accurate disparity prediction. Yet, as a large laparoscopic dataset for training learning-based methods does not exist and the generalization ability of networks remains to be improved, the incorporation of the proposed disparity refinement framework into existing networks will contribute to improving their overall accuracy and robustness associated with depth estimation.
- Published
- 2023
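The abstract above does not spell out the refinement steps, but a standard building block for detecting the noise-corrupted disparities it mentions is a left-right consistency check. The sketch below is illustrative only (not the authors' method); `disp_left`/`disp_right` are hypothetical disparity maps stored as nested lists:

```python
def lr_consistency_mask(disp_left, disp_right, thresh=1.0):
    """Flag left-image pixels whose disparity agrees with the right image.

    A pixel (y, x) is kept when d_L(y, x) ~= d_R(y, x - d_L(y, x));
    pixels that fail the check are candidates for refinement or inpainting.
    """
    h, w = len(disp_left), len(disp_left[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            xr = int(round(x - disp_left[y][x]))  # corresponding right-image column
            if 0 <= xr < w and abs(disp_left[y][x] - disp_right[y][xr]) <= thresh:
                mask[y][x] = True
    return mask
```

Pixels masked out near the left border (where the correspondence falls outside the image) are typically filled by a local or global refinement pass.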
4. Learning Feature Descriptors for Pre- and Intra-operative Point Cloud Matching for Laparoscopic Liver Registration
- Author
-
Yang, Zixin, Simon, Richard, and Linte, Cristian A.
- Subjects
FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Biomedical Engineering, Computer Science - Computer Vision and Pattern Recognition, Health Informatics, Radiology, Nuclear Medicine and imaging, Surgery, General Medicine, Computer Vision and Pattern Recognition, Computer Graphics and Computer-Aided Design, Computer Science Applications
- Abstract
Purpose: In laparoscopic liver surgery (LLS), pre-operative information can be overlaid onto the intra-operative scene by registering a 3D pre-operative model to the intra-operative partial surface reconstructed from the laparoscopic video. To assist with this task, we explore the use of learning-based feature descriptors, which, to the best of our knowledge, have not been explored for use in laparoscopic liver registration (LLR). Furthermore, a dataset to train and evaluate the use of learning-based descriptors does not exist. Methods: We present the LiverMatch dataset, consisting of 16 pre-operative models and their simulated intra-operative 3D surfaces. We also propose the LiverMatch network designed for this task, which outputs per-point feature descriptors, visibility scores, and matched points. Results: We compare the proposed LiverMatch network with a network closest to LiverMatch and a histogram-based 3D descriptor on the testing split of the LiverMatch dataset, which includes two unseen pre-operative models and 1400 intra-operative surfaces. Results suggest that our LiverMatch network can predict more accurate and dense matches than the other two methods and can be seamlessly integrated with a RANSAC-ICP-based registration algorithm to achieve an accurate initial alignment. Conclusion: The use of learning-based feature descriptors in LLR is promising, as it can help achieve an accurate initial rigid alignment, which, in turn, serves as an initialization for subsequent non-rigid registration. We will release the dataset and code upon acceptance.
- Published
- 2022
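As a rough illustration of the per-point matching step that such a descriptor network feeds into, here is a brute-force nearest-neighbour match in descriptor space (a generic sketch under invented names, not the LiverMatch code):

```python
def match_descriptors(desc_a, desc_b):
    """Brute-force nearest-neighbour matching in descriptor space.

    desc_a, desc_b: lists of equal-length feature tuples (one per point);
    returns (i, j) index pairs linking each point in A to its nearest in B.
    """
    matches = []
    for i, da in enumerate(desc_a):
        j = min(range(len(desc_b)),
                key=lambda j: sum((x - y) ** 2 for x, y in zip(da, desc_b[j])))
        matches.append((i, j))
    return matches
```

In a full pipeline, such putative matches would then be filtered by a robust estimator (e.g. RANSAC) before refinement with ICP.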
5. On mixed reality environments for minimally invasive therapy guidance: Systems architecture, successes and challenges in their implementation from laboratory to clinic
- Author
-
Linte, Cristian A., Davenport, Katherine P., Cleary, Kevin, Peters, Craig, Vosburgh, Kirby G., Navab, Nassir, Edwards, Philip “Eddie”, Jannin, Pierre, Peters, Terry M., Holmes, David R., III, and Robb, Richard A.
- Published
- 2013
- Full Text
- View/download PDF
6. Automatic Artery/Vein Classification in 2D-DSA Images of Stroke Patients
- Author
-
Van Asperen, Vivian, Van Den Berg, Josefien, Lycklama, Fleur, Marting, Victoria, Cornelissen, Sandra, Van Zwam, Wim H., Hofmeijer, Jeanette, Van Der Lugt, Aad, Van Walsum, Theo, Van Der Sluijs, Matthijs, Su, Ruisheng, Linte, Cristian A., Siewerdsen, Jeffrey H., Radiology & Nuclear Medicine, TechMed Centre, and Clinical Neurophysiology
- Subjects
Stroke, 22/3 OA procedure, Vessels, Artery-vein classification, DSA
- Abstract
To develop an objective system for perfusion assessment in digital subtraction angiography (DSA), artery-vein (A/V) classification is essential. In this study, an automated A/V classification system for 2D DSA images of stroke patients is proposed. After preprocessing through vessel segmentation with a Frangi filter and Gaussian smoothing, a time-intensity curve (TIC) of each vessel pixel was extracted and relevant parameters were calculated. Different combinations of input parameters were systematically tested to arrive at the optimal set of input parameters. The parameters formed the input for k-means (KM) and fuzzy c-means (FCM) clustering. Both algorithms were tested for clustering into 2 to 7 clusters. Cluster labeling was performed based on the average time to peak (TTP) of a cluster. A reference standard consisted of manually annotated DSA images from the MR CLEAN registry. Outcome measures were accuracy, true artery rate (TAR) and true vein rate (TVR). The optimal value for k was found to be 2 for both KM and FCM clustering. The optimal parameter set was: variance, standard deviation, maximal slope, peak width, time to peak, arrival time, maximal intensity and area under the TIC. No significant difference was found between FCM and KM clustering. Both FCM and KM clustering yielded an average accuracy of 76%, average TAR of 74% and average TVR of 80%.
- Published
- 2022
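The clustering-and-labelling step described above can be sketched as follows. This is an illustrative re-implementation, not the study's code; a two-feature TIC vector of (time to peak, maximal intensity) stands in for the full eight-parameter set:

```python
def kmeans_2(points, iters=25):
    """Plain k-means with k=2, initialised at the extreme points for determinism."""
    centroids = [min(points), max(points)]
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        centroids = [
            tuple(sum(vals) / len(cl) for vals in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

def label_by_ttp(clusters, ttp_index):
    """Per the abstract, clusters are labelled by average time-to-peak: earlier = artery."""
    mean_ttp = [sum(p[ttp_index] for p in cl) / len(cl) for cl in clusters]
    artery = mean_ttp.index(min(mean_ttp))
    return ["artery" if i == artery else "vein" for i in range(len(clusters))]
```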
7. Towards Real-time 6D Pose Estimation of Objects in Single-view Cone-beam X-ray
- Author
-
Viviers, Christiaan G.A., de Bruijn, Joël, de With, Peter H.N., van der Sommen, Fons, Filatova, Lena, Linte, Cristian A., Siewerdsen, Jeffrey H., Video Coding & Architectures, EAISI Health, Center for Care & Cure Technology Eindhoven, and Eindhoven MedTech Innovation Center
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, deep learning, 6D pose estimation, object detection, X-ray projection model, Machine Learning (cs.LG)
- Abstract
Deep learning-based pose estimation algorithms can successfully estimate the pose of objects in an image, especially in the field of color images. 6D object pose estimation based on deep learning models for X-ray images often uses custom architectures that employ extensive CAD models and simulated data for training purposes. Recent RGB-based methods opt to solve pose estimation problems using small datasets, making them more attractive for the X-ray domain, where medical data is scarcely available. We refine an existing RGB-based model (SingleShotPose) to estimate the 6D pose of a marked cube from grayscale X-ray images by creating a generic solution trained on only real X-ray data and adjusted for X-ray acquisition geometry. The model regresses 2D control points and calculates the pose through 2D/3D correspondences using Perspective-n-Point (PnP), allowing a single trained model to be used across all supporting cone-beam-based X-ray geometries. Since modern X-ray systems continuously adjust acquisition parameters during a procedure, it is essential for such a pose estimation network to consider these parameters in order to be deployed successfully and find a real use case. With a 5-cm/5-degree accuracy of 93% and an average 3D rotation error of 2.2 degrees, the results of the proposed approach are comparable with state-of-the-art alternatives, while requiring significantly fewer real training examples and being applicable in real-time applications. Published at SPIE Medical Imaging 2022.
- Published
- 2022
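The 5-cm/5-degree metric quoted above combines a translation threshold with a rotation-angle threshold; a minimal sketch of how such a criterion is typically computed (assuming 3x3 rotation matrices as nested lists and translations in centimetres):

```python
import math

def mat_mul_t(A, B):
    """Compute A^T @ B for 3x3 matrices given as nested lists."""
    return [[sum(A[k][i] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation_error_deg(R_gt, R_pred):
    """Geodesic angle between two rotations: acos((trace(R_gt^T R_pred) - 1) / 2)."""
    R_rel = mat_mul_t(R_gt, R_pred)
    c = (R_rel[0][0] + R_rel[1][1] + R_rel[2][2] - 1.0) / 2.0
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))  # clamp for rounding

def within_5cm_5deg(R_gt, t_gt, R_pred, t_pred):
    """The 5-cm/5-degree criterion: both errors must be under their thresholds."""
    return math.dist(t_gt, t_pred) <= 5.0 and rotation_error_deg(R_gt, R_pred) <= 5.0
```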
8. Tissue segmentation for workflow recognition in open inguinal hernia repair training
- Author
-
Klosa, Elizabeth, Hisey, Rebecca, Nazari, Tahmina, Wiggers, Theo, Zevin, Boris, Ungi, Tamas, Fichtinger, Gabor, Linte, Cristian A., Siewerdsen, Jeffrey H., and Surgery
- Abstract
PURPOSE: As medical education adopts a competency-based training method, experts are spending substantial amounts of time instructing and assessing trainees' competence. In this study, we look to develop a computer-assisted training platform that can provide instruction and assessment of open inguinal hernia repairs without needing an expert observer. We recognize workflow tasks based on the tool-tissue interactions, suggesting that we first need a method to identify tissues. This study aims to train a neural network in identifying tissues in a low-cost phantom as we work towards identifying the tool-tissue interactions needed for task recognition. METHODS: Eight simulated tissues were segmented throughout five videos from experienced surgeons who performed open inguinal hernia repairs on phantoms. A U-Net was trained using leave-one-user-out cross validation. The average F-score, false positive rate and false negative rate were calculated for each tissue to evaluate the U-Net's performance. RESULTS: Higher F-scores and lower false negative and positive rates were recorded for the skin, hernia sac, spermatic cord, and nerves, while slightly lower metrics were recorded for the subcutaneous tissue, Scarpa's fascia, external oblique aponeurosis and superficial epigastric vessels. CONCLUSION: The U-Net performed better in recognizing tissues that were relatively larger in size and more prevalent, while struggling to recognize smaller tissues only briefly visible. Since workflow recognition does not require perfect segmentation, we believe our U-Net is sufficient in recognizing the tissues of an inguinal hernia repair phantom. Future studies will explore combining our segmentation U-Net with tool detection as we work towards workflow recognition.
- Published
- 2022
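The per-tissue evaluation metrics named above (F-score, false positive rate, false negative rate) can be computed from each tissue class's confusion counts; a small sketch:

```python
def per_tissue_metrics(tp, fp, fn, tn):
    """F-score, false positive rate and false negative rate from confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_score = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0  # background wrongly labelled as this tissue
    fnr = fn / (fn + tp) if fn + tp else 0.0  # fraction of this tissue that was missed
    return f_score, fpr, fnr
```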
9. A Direct High-Order Curvilinear Triangular Mesh Generation Method Using an Advancing Front Technique
- Author
-
Mohammadi, Fariba, Dangi, Shusil, Shontz, Suzanne M., and Linte, Cristian A.
- Subjects
Advancing front, Curvilinear triangular mesh, High-order mesh generation, Article, ComputingMethodologies_COMPUTERGRAPHICS, Mathematics::Numerical Analysis
- Abstract
In this paper, we propose a novel method of generating high-order curvilinear triangular meshes using an advancing front approach. Our method relies on a direct approach to generate meshes on geometries with curved boundaries. Our advancing front method yields high-quality triangular elements in each iteration, obviating the need for post-processing steps. We present several numerical examples of second-order curvilinear triangular meshes of patient-specific anatomical models generated using our technique on boundary meshes obtained from biomedical images.
- Published
- 2020
10. Image based registration between full x-ray and spot mammograms for x-ray guided stereotactic breast biopsy
- Author
-
Said, Sarah, Clauser, Paola, Ruiter, Nicole, Baltzer, Pascal, Hopp, Torsten, Linte, Cristian A., and Siewerdsen, Jeffrey H.
- Subjects
ddc:620, Engineering & allied operations
- Published
- 2022
- Full Text
- View/download PDF
11. Segmentation of the mouse skull for MRI guided transcranial focused ultrasound therapy planning
- Author
-
Hopp, Torsten, Springer, Luca, Gross, Carl, Grudzenski-Theis, Saskia, Mathis-Ullrich, Franziska, Ruiter, Nicole, Linte, Cristian A., and Siewerdsen, Jeffrey H.
- Subjects
DATA processing & computer science, ddc:004
- Abstract
For opening the blood brain barrier using focused ultrasound (FUS) to treat neurodegenerative diseases, mouse-specific therapy planning is an essential step. For our therapy planning approach based on acoustic simulations, we propose to automatically segment the mouse skull and brain from magnetic resonance imaging, which is usually used in combination with FUS for monitoring purposes. The proposed method consists of (1) pre-processing to enhance the image contrast and remove noise, (2) a rough skull segmentation using morphological operations and adaptive binarization, (3) segmentation of the brain using the established 3D-PCNN method, (4) correction of the skull segmentation using the anatomical information about the brain location, and (5) post-processing to remove obvious errors from the final skull segmentation. The method is evaluated on four in-vivo datasets obtained with different parameters. The median Matthews Correlation Coefficient (MCC) across all slices of the four datasets was 0.85 for the brain segmentation, 0.69 for the overall skull segmentation, and 0.78 for the skull cap. Finally, to showcase the application, an acoustic simulation based on the segmentation is presented, which yields a pressure-field prediction comparable to that of our earlier micro-CT-based method and lines up well with literature estimates of ultrasound attenuation.
- Published
- 2022
- Full Text
- View/download PDF
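The Matthews Correlation Coefficient used for evaluation above is computed from a binary confusion matrix; as a sketch:

```python
import math

def mcc(tp, fp, fn, tn):
    """Matthews Correlation Coefficient for a binary segmentation (per slice or volume)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Unlike plain accuracy, MCC stays informative when foreground and background are heavily imbalanced, as with a thin skull against a large image background.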
12. Applied Sciences—Special Issue on Emerging Techniques in Imaging, Modelling and Visualization for Cardiovascular Diagnosis and Therapy.
- Author
-
Linte, Cristian A. and Pop, Mihaela
- Subjects
APPLIED sciences, DIAGNOSIS, CARDIAC magnetic resonance imaging, ELECTROPORATION, HEART, MEDICAL specialties & specialists, MAGNETIC resonance imaging, CARDIAC pacing, MYOCARDIUM
- Abstract
Ongoing developments in computing and data acquisition, along with continuous advances in medical imaging technology, computational modelling, robotics and visualization, have revolutionized many medical specialties and, in particular, diagnostic and interventional cardiology. As a result, the diagnosis and treatment of many cardiac conditions that previously relied on invasive tests or procedures have been reshaped by breakthroughs in medical imaging and visualization.
- Published
- 2023
- Full Text
- View/download PDF
13. Learning Deep Representations of Cardiac Structures for 4D Cine MRI Image Segmentation through Semi-Supervised Learning.
- Author
-
Hasan, S. M. Kamrul and Linte, Cristian A.
- Subjects
SUPERVISED learning, IMAGE segmentation, DEEP learning, MAGNETIC resonance imaging, GENERATIVE adversarial networks, CARDIAC magnetic resonance imaging
- Abstract
Learning good data representations for medical imaging tasks ensures the preservation of relevant information and the removal of irrelevant information from the data to improve the interpretability of the learned features. In this paper, we propose a semi-supervised model—namely, combine-all in semi-supervised learning (CqSL)—to demonstrate the power of a simple combination of a disentanglement block, variational autoencoder (VAE), generative adversarial network (GAN), and a conditioning layer-based reconstructor for performing two important tasks in medical imaging: segmentation and reconstruction. Our work is motivated by the recent progress in image segmentation using semi-supervised learning (SSL), which has shown good results with limited labeled data and large amounts of unlabeled data. A disentanglement block decomposes an input image into a domain-invariant spatial factor and a domain-specific non-spatial factor. We assume that medical images acquired using multiple scanners (different domain information) share a common spatial space but differ in non-spatial space (intensities, contrast, etc.). Hence, we utilize our spatial information to generate segmentation masks from unlabeled datasets using a generative adversarial network (GAN). Finally, to reconstruct the original image, our conditioning layer-based reconstruction block recombines spatial information with random non-spatial information sampled from the generative models. Our ablation study demonstrates the benefits of disentanglement in holding domain-invariant (spatial) as well as domain-specific (non-spatial) information with high accuracy. We further apply a structured L2 similarity (SL2SIM) loss along with a mutual information minimizer (MIM) to improve the adversarially trained generative models for better reconstruction.
Experimental results achieved on the STACOM 2017 ACDC cine cardiac magnetic resonance (MR) dataset suggest that our proposed (CqSL) model outperforms fully supervised and semi-supervised models, achieving 83.2% performance accuracy even when using only 1% labeled data. We hypothesize that our proposed model has the potential to become an efficient semantic segmentation tool that may be used for domain adaptation in data-limited medical imaging scenarios, where annotations are expensive. Code and experimental configurations will be made available publicly.
- Published
- 2022
- Full Text
- View/download PDF
14. Augmented-reality visualization for improved patient positioning workflow during MR-HIFU therapy
- Author
-
Manni, Francesca, Ferrer, Cyril J., Vincent, Celine E.C., Lai, Marco, Bartels, L.W., Bos, Clemens, van der Sommen, Fons, de With, Peter H.N., Linte, Cristian A., Siewerdsen, Jeffrey H., Center for Care & Cure Technology Eindhoven, Eindhoven MedTech Innovation Center, Video Coding & Architectures, and EAISI Health
- Subjects
Image-Guided Therapy, Computer science, Patient Tracking, medicine.medical_treatment, 0206 medical engineering, 02 engineering and technology, Augmented reality, 030218 nuclear medicine & medical imaging, 03 medical and health sciences, 0302 clinical medicine, MR-HIFU, medicine, Image fusion, Computer vision, Radiation treatment planning, business.industry, 020601 biomedical engineering, Patient tracking, High-intensity focused ultrasound, Visualization, Workflow, Artificial intelligence, business
- Abstract
MR-guided high-intensity focused ultrasound (MR-HIFU) is a non-invasive therapeutic technology which has demonstrated clinical potential for tissue ablation. This therapeutic approach has emerged as a promising option for achieving faster pain palliation in patients with bone metastases. However, its clinical adoption is still hampered by a lack of workflow integration. Currently, to ensure sufficient positioning accuracy, MR images have to be repeatedly acquired in between patient re-positioning tasks, leading to a time-consuming preparation phase of at least 30 minutes that adds extra cost and cuts into the available treatment time. Augmented reality (AR) is a promising technology that enables the fusion of medical images, such as MRI, with the view of an external camera, and represents a valid tool for faster localization and visualization of the lesion during positioning. The aim of this work is the implementation of a novel AR setup for accelerating patient positioning during MR-HIFU treatments by enabling adequate target positioning outside the MRI scanner. A marker-based approach was investigated for fusing the MR data with video data to provide an augmented view. Initial experiments on four volunteers show that MR images were overlaid on the camera views with an average re-projection error of 3.13 mm, which matches the clinical requirements for this specific application. It can be concluded that the implemented system is suitable for MR-HIFU procedures and supports clinical adoption by improving patient positioning, thereby offering potential for faster treatment times.
- Published
- 2021
15. Hyperspectral imaging for tissue classification in glioblastoma tumor patients: a deep spectral-spatial approach
- Author
-
Manni, Francesca, Cai, Chuchen, van der Sommen, Fons, Zinger, Sveta, Shan, Caifeng, Edström, Erik, Elmi-Terander, Adrian, Fabelo, Himar, Ortega, Samuel, Marrero Callicó, Gustavo, de With, Peter H.N., Linte, Cristian A., Siewerdsen, Jeffrey H., Center for Care & Cure Technology Eindhoven, Eindhoven MedTech Innovation Center, Video Coding & Architectures, Electrical Engineering, Signal Processing Systems, Biomedical Diagnostics Lab, and EAISI Health
- Subjects
medicine.medical_specialty, Hyperspectral imaging, business.industry, Tumor resection, Context (language use), medicine.disease, Gross Total Resection, Malignant brain tumor, Support vector machine, Ant colony optimization, Optical imaging, Medicine, Radiology, Frozen tissue, business, 3D-2D convolutional neural network (CNN), Glioblastoma
- Abstract
Surgery is a crucial treatment for malignant brain tumors where gross total resection improves the prognosis. Tissue samples taken during surgery are either subject to a preliminary intraoperative histological analysis, or sent for a full pathological evaluation which can take days or weeks. Whereas a lengthy complete pathological analysis includes an array of techniques to be executed, a preliminary tissue analysis on frozen tissue is performed as quickly as possible (30-45 minutes on average) to provide fast feedback to the surgeon during the surgery. The surgeon uses the information to confirm that the resected tissue is indeed tumor and may, at least in theory, initiate repeated biopsies to help achieve gross total resection. However, due to the total turn-around time of the tissue inspection for repeated analyses, this approach may not be feasible during a single surgery. In this context, intraoperative image-guided techniques can improve the clinical workflow for tumor resection and improve outcome by aiding in the identification and removal of the malignant lesion. Hyperspectral imaging (HSI) is an optical imaging technique with the potential to extract combined spectral-spatial information. By exploiting HSI for human brain-tissue classification in 13 in-vivo hyperspectral images from 9 patients, a brain-tissue classifier is developed. The framework consists of a hybrid 3D-2D CNN-based approach and a band-selection step to enhance the capability of extracting both spectral and spatial information from the hyperspectral images. An overall accuracy of 77% was found when tumor, normal and hyper-vascularized tissue are classified, which clearly outperforms the state-of-the-art approaches (SVM, 2D-CNN). These results may open an attractive future perspective for intraoperative brain-tumor classification using HSI.
- Published
- 2021
16. A feature-based affine registration method for capturing background lung tissue deformation for ground glass nodule tracking.
- Author
-
Ben-Zikri, Yehuda K., Helguera, María, Fetzer, David, Shrier, David A., Aylward, Stephen. R., Chittajallu, Deepak, Niethammer, Marc, Cahill, Nathan D., and Linte, Cristian A.
- Subjects
LUNGS, RECORDING & registration, PULMONARY nodules, COMPUTED tomography, GLASS, TISSUES
- Abstract
Apparent changes in lung nodule size assessed via simple image-based measurements from computed tomography (CT) images may be compromised by the effect of the background lung tissue deformation on the nodule, leading to erroneous nodule tracking. We propose a feature-based affine registration method and study its performance vis-a-vis several other registration methods. We implement and test each registration method using a lung- and a lesion-centred region of interest on 10 patient CT datasets featuring 12 nodules. We evaluate each registration method according to the target registration error (TRE) computed across 30–50 homologous fiducial landmarks selected by expert radiologists. Our results show that the proposed feature-based affine lesion-centred registration yielded a 1.1 ± 1.2 mm TRE, while a Symmetric Normalisation deformable registration yielded a 1.2 ± 1.2 mm TRE, with a baseline least-square fit of the validation fiducial landmarks of 1.5 ± 1.2 mm TRE. The proposed feature-based affine registration is computationally efficient, eliminates the need for nodule segmentation, and reduces the susceptibility to artificial deformations. We also conducted a pilot pre-clinical study that showed the proposed feature-based lesion-centred affine registration effectively compensates for the background lung tissue deformation and serves as a reliable baseline registration method prior to assessing lung nodule changes due to disease.
- Published
- 2022
- Full Text
- View/download PDF
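The target registration error (TRE) reported above is, in essence, the mean distance between transformed fiducials and their homologous landmarks. A minimal sketch with a 3x3 affine matrix `A` and translation `t` (illustrative, not the paper's implementation):

```python
import math

def apply_affine(A, t, p):
    """Map a 3D point p through the affine transform x -> A @ x + t."""
    return tuple(sum(A[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))

def target_registration_error(A, t, fixed_pts, moving_pts):
    """Mean distance between transformed fiducials and their homologous landmarks."""
    dists = [math.dist(apply_affine(A, t, f), m)
             for f, m in zip(fixed_pts, moving_pts)]
    return sum(dists) / len(dists)
```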
17. Accuracy considerations in image-guided cardiac interventions: experience and lessons learned
- Author
-
Linte, Cristian A., Lang, Pencilla, Rettmann, Maryam E., Cho, Daniel S., Holmes, David R., III, Robb, Richard A., and Peters, Terry M.
- Published
- 2012
- Full Text
- View/download PDF
18. Inside the beating heart: an in vivo feasibility study on fusing pre- and intra-operative imaging for minimally invasive therapy
- Author
-
Linte, Cristian A., Moore, John, Wedlake, Chris, Bainbridge, Daniel, Guiraudon, Gérard M., Jones, Douglas L., and Peters, Terry M.
- Published
- 2009
- Full Text
- View/download PDF
19. Chapter 28 - Interventional imaging: Ultrasound
- Author
-
Hacihaliloglu, Ilker, Chen, Elvis C.S., Mousavi, Parvin, Abolmaesumi, Purang, Boctor, Emad, and Linte, Cristian A.
- Published
- 2020
- Full Text
- View/download PDF
20. Papers from the 17th Joint Workshop on Augmented Environments for Computer Assisted Interventions at MICCAI 2023: Guest Editors' Foreword.
- Author
-
Linte, Cristian A., Yaniv, Ziv, Chen, Elvis, Dou, Qi, Drouin, Simon, Kalia, Megha, Kersten‐Oertel, Marta, McLeod, Jonathan, and Sarikaya, Duygu
- Published
- 2024
- Full Text
- View/download PDF
21. L-CO-Net: Learned Condensation-Optimization Network for Clinical Parameter Estimation from Cardiac Cine MRI
- Author
-
Hasan, S. M. Kamrul and Linte, Cristian A.
- Subjects
FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), FOS: Electrical engineering, electronic engineering, information engineering, Computer Science - Computer Vision and Pattern Recognition, Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
In this work, we implement a fully convolutional segmenter featuring both a learned group structure and a regularized weight-pruner to reduce the high computational cost in volumetric image segmentation. We validated our framework on the ACDC dataset featuring one healthy and four pathology groups imaged throughout the cardiac cycle. Our technique achieved Dice scores of 96.8% (LV blood-pool), 93.3% (RV blood-pool) and 90.0% (LV Myocardium) with five-fold cross-validation and yielded similar clinical parameters as those estimated from the ground truth segmentation data. Based on these results, this technique has the potential to become an efficient and competitive cardiac image segmentation tool that may be used for cardiac computer-aided diagnosis, planning, and guidance applications.
- Published
- 2020
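The Dice scores reported above measure overlap between predicted and ground-truth segmentation masks; as a sketch, with masks represented as sets of voxel indices:

```python
def dice(pred, gt):
    """Dice overlap between two segmentation masks given as sets of voxel indices."""
    if not pred and not gt:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(pred & gt) / (len(pred) + len(gt))
```

The same coefficient underlies the per-structure scores quoted for both this entry and the CondenseUNet entry below it in standard cardiac segmentation benchmarks.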
22. CondenseUNet: A Memory-Efficient Condensely-Connected Architecture for Bi-ventricular Blood Pool and Myocardium Segmentation
- Author
-
Hasan, S. M. Kamrul and Linte, Cristian A.
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), FOS: Electrical engineering, electronic engineering, information engineering, Computer Science - Computer Vision and Pattern Recognition, Electrical Engineering and Systems Science - Image and Video Processing, Machine Learning (cs.LG)
- Abstract
With the advent of Cardiac Cine Magnetic Resonance (CMR) Imaging, there has been a paradigm shift in medical technology, thanks to its capability of imaging different structures within the heart without ionizing radiation. However, it is very challenging to conduct pre-operative planning of minimally invasive cardiac procedures without accurate segmentation and identification of the left ventricle (LV), right ventricle (RV) blood-pool, and LV-myocardium. Manual segmentation of those structures, nevertheless, is time-consuming and often prone to error and biased outcomes. Hence, automatic and computationally efficient segmentation techniques are paramount. In this work, we propose a novel memory-efficient Convolutional Neural Network (CNN) architecture as a modification of both CondenseNet and DenseNet for ventricular blood-pool segmentation by introducing a bottleneck block and an upsampling path. Our experiments show that the proposed architecture runs on the Automated Cardiac Diagnosis Challenge (ACDC) dataset using half (50%) the memory requirement of DenseNet and one-twelfth (~8%) of the memory requirement of U-Net, while still maintaining excellent cardiac segmentation accuracy. We validated the framework on the ACDC dataset featuring one healthy and four pathology groups whose heart images were acquired throughout the cardiac cycle, and achieved mean Dice scores of 96.78% (LV blood-pool), 93.46% (RV blood-pool) and 90.1% (LV-myocardium). These results are promising and promote the proposed methods as a competitive tool for cardiac image segmentation and clinical parameter estimation that has the potential to provide fast and accurate results, as needed for pre-procedural planning and/or pre-operative applications.
- Published
- 2020
23. Automated classification of brain tissue: comparison between hyperspectral imaging and diffuse reflectance spectroscopy
- Author
-
Lai, Marco, Skyrman, Simon, Shan, Caifeng, Paulussen, Elvira, Manni, Francesca, Swamy, A., Babic, Drazenko, Edstrom, Erik, Persson, Oscar, Burstrom, Gustav, Elmi-Terander, Adrian, Hendriks, B.H.W., De With, Peter H.N., Fei, Baowei, Linte, Cristian A., Video Coding & Architectures, Center for Care & Cure Technology Eindhoven, and EAISI Health
- Subjects
Spectral signature ,Brain surgery ,Contextual image classification ,Diffuse reflectance infrared fourier transform ,Hyperspectral imaging ,Computer science ,Image classification ,010401 analytical chemistry ,Neurosurgery ,Tissue classification ,020206 networking & telecommunications ,02 engineering and technology ,Brain tissue ,01 natural sciences ,0104 chemical sciences ,Support vector machine ,Image-guided surgery ,Machine learning ,0202 electrical engineering, electronic engineering, information engineering ,Diffuse reflectance spectroscopy ,Image sensor ,Biomedical engineering - Abstract
In neurosurgery, technical solutions for visualizing the border between healthy brain and tumor tissue are of great value, since they enable the surgeon to achieve gross total resection while minimizing the risk of damage to eloquent areas. By using real-time non-ionizing imaging techniques, such as hyperspectral imaging (HSI), the spectral signature of the tissue is analyzed, allowing tissue classification and thereby improving tumor boundary discrimination during surgery. In particular, since infrared light penetrates deeper into tissue than visible light, an imaging sensor sensitive to the near-infrared wavelength range would also allow the visualization of structures slightly beneath the tissue surface. This enables the visualization of tumor and vessel boundaries prior to surgery, thereby preventing damage to tissue structures. In this study, we investigate the use of Diffuse Reflectance Spectroscopy (DRS) and HSI for brain tissue classification by extracting spectral features from the near-infrared range. The applied classification method is a linear Support Vector Machine (SVM). The study is conducted on ex-vivo porcine brain tissue, which is analyzed and classified as either white or gray matter. DRS combined with the proposed classification reaches a sensitivity and specificity of 96%, while HSI reaches a sensitivity of 95% and a specificity of 93%. This feasibility study shows the potential of DRS and HSI for automated tissue classification, and serves as a first step towards clinical use for tumor detection deeper inside the tissue.
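The sensitivity and specificity figures above follow directly from a binary confusion matrix; a minimal sketch with hypothetical white/gray-matter labels (not the study's data):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    for binary labels, with 1 denoting the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy labels: 1 = white matter, 0 = gray matter (illustrative only)
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.75 0.75
```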
- Published
- 2020
24. Value based decision support to prioritize development of innovative technologies for image-guided vascular surgery in the hybrid operating theater
- Author
-
Heslinga, Friso G., Koffijberg, Hendrik, Geelkerken, Robert H., Meerwaldt, Robbert, Ter Mors, Thijs G., Doggen, Carine J.M., Hummel, Marjan, Fei, Baowei, Linte, Cristian A., Health Technology & Services Research, and Multi-Modality Medical Imaging
- Subjects
Decision support system ,Relative value ,Prioritization ,Computer science ,Emerging technologies ,Multiple-criteria decision analysis ,Innovative technologies ,Image-guided surgery ,Operating theater ,Hybrid operating theater ,Multi-criteria decision analysis ,Scale (social sciences) ,Operations management ,Decision analysis - Abstract
Innovative technologies for minimally invasive interventions have the potential to add value to vascular procedures in the hybrid operating theater (HOT). Restricted budgets require prioritization of the development of these technologies. We aim to provide vascular surgeons with a structured methodology to incorporate possibly conflicting criteria when prioritizing the development of new technologies. We propose a multi-criteria decision analysis framework, based on the MACBETH methodology, to evaluate the value of innovative technologies for the HOT. The framework is applied to a specific case: the new HOT in a large teaching hospital. Three upcoming innovations are scored for three different endovascular procedures. Two vascular surgeons scored the expected performance of these innovations for each of the procedures on six performance criteria and weighed the importance of these criteria. The overall value of each innovation was calculated as the weighted average of its performance scores. On a scale from 0 to 100 describing the overall value, the current HOT scored near the midpoint of the scale (49.9). A wound perfusion measurement tool scored highest (69.1) of the three innovations, mainly due to its relatively high score for crural revascularization procedures (72). The novel framework can be used to determine the relative value of innovative technologies for the HOT. When development costs are assumed to be similar, and a single budget holder decides on technology development, priority should be given to the development of a wound perfusion measurement tool.
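The weighted-average scoring described above can be illustrated in a few lines; the scores and weights below are hypothetical, not the values elicited from the surgeons:

```python
def overall_value(scores, weights):
    """Overall value of an innovation as the weighted average of its
    per-criterion performance scores; weights are assumed to sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "criterion weights must sum to 1"
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical scores on six performance criteria (0-100 scale) and weights
scores  = [80, 70, 60, 75, 65, 70]
weights = [0.25, 0.20, 0.15, 0.15, 0.15, 0.10]
print(round(overall_value(scores, weights), 2))  # 71.0
```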
- Published
- 2020
25. Feasibility study of catheter segmentation in 3D Frustum Ultrasounds by DCNN
- Author
-
Min, Lan, Yang, Hongxu, Shan, Caifeng, Kolen, Alexander F., de With, Peter, Fei, Baowei, Linte, Cristian A., Video Coding & Architectures, Center for Care & Cure Technology Eindhoven, and EAISI Health
- Subjects
Frustum ,medicine.diagnostic_test ,Computer science ,business.industry ,Bandwidth (signal processing) ,Ex-vivo dataset ,Filter (signal processing) ,computer.software_genre ,Catheter segmentation ,Catheter ,Voxel ,3D Frustum ultrasound ,Sonographer ,medicine ,Segmentation ,Computer vision ,3D ultrasound ,DCNN ,Artificial intelligence ,business ,computer - Abstract
Nowadays, 3D ultrasound (US) has been rapidly adopted in medical intervention therapies, such as cardiac catheterization. Efficiently interpreting 3D US images and localizing the catheter during surgery requires an experienced sonographer. Consequently, image-based catheter detection can help the sonographer localize the instrument in the 3D US images in a timely manner. Conventionally, 3D imaging methods operate in the Cartesian domain, which is limited by bandwidth and incurs information loss when converting from the original acquisition space, the Frustum domain. Exploring catheter segmentation in Frustum space helps reduce the computational cost and improve efficiency. In this paper, we present a catheter segmentation method for 3D Frustum images via a deep convolutional neural network (DCNN). To better describe 3D information and reduce the complexity of the DCNN, cross-planes with spatial gaps are extracted for each voxel. The cross-planes of each voxel are then processed by the DCNN to classify whether it is a catheter voxel or not. To accelerate prediction over the whole US Frustum volume, a filter-based pre-selection is applied to reduce the computational cost of the DCNN. In experiments on an ex-vivo dataset, our proposed method segmented the catheter in Frustum images with a 0.67 Dice score within 3 seconds, which indicates the possibility of real-time application.
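The per-voxel cross-plane idea can be sketched as extracting three orthogonal, gap-subsampled slices through a voxel of a 3D volume; the gap size and volume below are illustrative, and this is not the authors' implementation:

```python
import numpy as np

def cross_planes(volume, z, y, x, gap=1):
    """Extract three orthogonal slices through voxel (z, y, x), subsampled
    with a spatial gap, as a compact 3D-context descriptor for that voxel."""
    axial    = volume[z, ::gap, ::gap]
    coronal  = volume[::gap, y, ::gap]
    sagittal = volume[::gap, ::gap, x]
    return axial, coronal, sagittal

# Toy 16^3 volume standing in for a Frustum-domain US acquisition
vol = np.random.rand(16, 16, 16)
planes = cross_planes(vol, 8, 8, 8, gap=2)
print([p.shape for p in planes])  # [(8, 8), (8, 8), (8, 8)]
```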
- Published
- 2020
26. Investigating Perioperative Heart Migration During Robot-Assisted Coronary Artery Bypass Grafting Interventions
- Author
-
Linte, Cristian A., Cho, Daniel S., Wedlake, Chris, Moore, John, Chen, Elvis, Bainbridge, Daniel, Patel, Rajni, Peters, Terry, and Kiaii, Bob B.
- Published
- 2011
- Full Text
- View/download PDF
27. Augmented Reality Image Guidance During Off-Pump Mitral Valve Replacement Through the Guiraudon Universal Cardiac Introducer
- Author
-
Guiraudon, Gerard M., Jones, Douglas L., Bainbridge, Daniel, Linte, Cristian, Pace, Danielle, Moore, John, Wedlake, Christopher, Lang, Pencilla, and Peters, Terry M.
- Published
- 2010
- Full Text
- View/download PDF
28. Off-Pump Atrial Septal Defect Closure Using the Universal Cardiac Introducer®: Creation of Models of Atrial Septal Defects in the Pig Access and Surgical Technique
- Author
-
Guiraudon, Gerard M., Jones, Douglas L., Bainbridge, Daniel, Moore, John T., Wedlake, Chris, Linte, Cristian, Wiles, Andrew, and Peters, Terry M.
- Published
- 2009
- Full Text
- View/download PDF
29. Automated tumor assessment of squamous cell carcinoma on tongue cancer patients with hyperspectral imaging
- Author
-
Manni, Francesca, van der Sommen, Fons, Zinger, Sveta, Kho, Esther, Brouwer de Koning, Susan, Ruers, Theo, Shan, Caifeng, Schleipen, Jean, de With, Peter, Fei, Baowei, Linte, Cristian A., Video Coding & Architectures, Center for Care & Cure Technology Eindhoven, Signal Processing Systems, Biomedical Diagnostics Lab, Technical Medicine, and Nanobiophysics
- Subjects
Larynx ,Nasal cavity ,Tongue cancer ,medicine.medical_specialty ,Support vector machine ,image-guided surgery ,Hyperspectral imaging ,business.industry ,Image classification ,Head and neck cancer ,Cancer ,SDG 3 – Goede gezondheid en welzijn ,medicine.disease ,Cancer detection ,medicine.anatomical_structure ,Image-guided surgery ,SDG 3 - Good Health and Well-being ,Surgical oncology ,Tongue ,intraoperative tumor detection ,medicine ,Radiology ,business - Abstract
Head and neck cancer (HNC) includes cancers in the oral/nasal cavity, pharynx, larynx, etc., and it is the sixth most common cancer worldwide. The principal treatment is surgical removal, where a complete tumor resection is crucial to reduce the recurrence and mortality rate. Intraoperative tumor imaging enables surgeons to objectively visualize the malignant lesion to maximize tumor removal with healthy safe margins. Hyperspectral imaging (HSI) is an emerging imaging modality for cancer detection, which can augment surgical tumor inspection, currently limited to subjective visual inspection. In this paper, we aim to investigate HSI for automated cancer detection during image-guided surgery, because it can provide quantitative information about light interaction with biological tissues and exploit the potential for malignant tissue discrimination. The proposed solution forms a novel framework for automated tongue-cancer detection, explicitly exploiting HSI, in particular the spectral variations in specific bands that describe cancerous tissue properties. The method follows a machine-learning-based classification, employing a linear support vector machine (SVM), and offers superior sensitivity and a significant decrease in computation time. The model is evaluated on 7 ex-vivo specimens of squamous cell carcinoma of the tongue with known histology. HSI combined with the proposed classification reaches a sensitivity of 94%, a specificity of 68% and an area under the curve (AUC) of 92%. This feasibility study paves the way for introducing HSI as a non-invasive imaging aid for cancer detection and for increasing the effectiveness of surgical oncology.
- Published
- 2019
30. Fully resolved simulation and ultrasound flow studies in stented carotid aneurysm model
- Author
-
Mikhal, J., Hoving, A.M., Ong, G.M., Slump, C.H., Fei, Baowei, and Linte, Cristian A.
- Subjects
Computer simulation ,Computer science ,medicine.medical_treatment ,Stent ,Blood flow ,Immersed boundary method ,medicine.disease ,Aneurysm ,Visualization ,Ultrasound PIV ,Particle image velocimetry ,Flow (mathematics) ,medicine ,Numerical simulations ,cardiovascular diseases ,Flow-diverting stent ,Carotid artery ,Biomedical engineering ,3d printed geometry - Abstract
Introduction. Treatment choice for extracranial carotid artery widening, also called an aneurysm, is difficult. Blood flow simulation and experimental visualization can support clinical decision making and patient-specific treatment prediction. This study aims to simulate and validate the effect of flow-diverting stent placement on blood flow characteristics using numerical and in vitro simulation techniques in simplified carotid artery and aneurysm models. Methods. We have developed a workflow from geometry design to flow simulations and in vitro measurements in a carotid aneurysm model. To show the feasibility of the numerical simulation part of the workflow, which uses an immersed boundary method, we study a model geometry of an extracranial carotid artery aneurysm and place a flow-diverting stent in the aneurysm. We use ultrasound particle image velocimetry (PIV) to experimentally visualize the flow inside the aneurysm model. Results. Feasibility of ultrasound flow visualization, virtual flow-diverting stent placement and numerical flow simulation is demonstrated. Flow is resolved to scales much smaller than the cross section of individual wires of the flow-diverting stent. Numerical analysis of the stented model showed a 25% reduction of the blood flow inside the aneurysm sac. Quantitative comparison of experimental and numerical results showed agreement in 1D velocity profiles. Discussion/conclusion. We find good numerical convergence of the simulations at appropriate spatial resolutions using the immersed boundary method. This allows us to quantify the changes in flow in model geometries after deploying a flow-diverting stent. Using PIV, we visualized the physiological blood flow in a 1-to-1 aneurysm model, showing good correspondence to the numerical simulations. The novel workflow enables numerical as well as experimental flow simulations in patient-specific cases before and after flow-diverting stent placement, which may contribute to endovascular treatment prediction.
- Published
- 2019
31. U-NetPlus: A Modified Encoder-Decoder U-Net Architecture for Semantic and Instance Segmentation of Surgical Instrument
- Author
-
Hasan, S. M. Kamrul and Linte, Cristian A.
- Subjects
FOS: Computer and information sciences ,Computer Vision and Pattern Recognition (cs.CV) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Computer Science - Computer Vision and Pattern Recognition - Abstract
Conventional therapy approaches limit surgeons' dexterity control due to a limited field-of-view. With the advent of robot-assisted surgery, there has been a paradigm shift in medical technology for minimally invasive surgery. However, it is very challenging to track the position of surgical instruments in a surgical scene, and accurate detection and identification of surgical tools is paramount. Deep learning-based semantic segmentation of surgery video frames has the potential to facilitate this task. In this work, we modify the U-Net architecture, termed U-NetPlus, by introducing a pre-trained encoder and re-designing the decoder, replacing the transposed convolution operation with an upsampling operation based on nearest-neighbor (NN) interpolation. To further improve performance, we also employ a very fast and flexible data augmentation technique. We trained the framework on 8 x 225 frame sequences of robotic surgical videos, available through the MICCAI 2017 EndoVis Challenge dataset, and tested it on 8 x 75 frame and 2 x 300 frame videos. Using our U-NetPlus architecture, we report a 90.20% DICE for binary segmentation, 76.26% DICE for instrument part segmentation, and 46.07% for instrument type (i.e., all instruments) segmentation, outperforming previous techniques implemented and tested on these data.
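The nearest-neighbor upsampling that replaces transposed convolution in the decoder can be sketched on a 2D feature map; this is an illustrative numpy stand-in for the framework-level operation:

```python
import numpy as np

def nn_upsample(feature_map, scale=2):
    """Nearest-neighbor upsampling of a 2D feature map: each value is
    repeated along both axes. Parameter-free, unlike transposed convolution."""
    return feature_map.repeat(scale, axis=0).repeat(scale, axis=1)

x = np.array([[1, 2],
              [3, 4]])
print(nn_upsample(x))
```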
- Published
- 2019
32. Deep-learning-based 2.5D flow field estimation for maximum intensity projections of 4D optical coherence tomography
- Author
-
Laves, Max-Heinrich, Ihler, Sontje, Kahrs, Lüder A., Ortmaier, Tobias, Fei, Baowei, and Linte, Cristian A.
- Subjects
Microsurgery ,genetic structures ,Computer science ,medicine.medical_treatment ,Dewey Decimal Classification::600 | Technik::620 | Ingenieurwissenschaften und Maschinenbau ,Optical flow ,computer.software_genre ,law.invention ,Optical coherence tomography ,Voxel ,law ,medicine ,Computer vision ,Konferenzschrift ,Laser ablation ,medicine.diagnostic_test ,business.industry ,Orientation (computer vision) ,Tracking ,Ablation ,Laser ,Data set ,Cochlear implantation ,Scene flow ,Artificial intelligence ,ddc:620 ,business ,computer ,Laser control - Abstract
In microsurgery, lasers have emerged as precise tools for bone ablation. A key challenge is automatic control of laser bone ablation with 4D optical coherence tomography (OCT). OCT, as a high-resolution imaging modality, provides volumetric images of tissue and yields information on bone position and orientation (pose), as well as thickness. However, existing approaches for OCT-based laser ablation control rely on external tracking systems or invasively ablated artificial landmarks to track the pose of the OCT probe relative to the tissue. This can be superseded by estimating the scene flow caused by relative movement between the OCT-based laser ablation system and the patient. Therefore, this paper deals with 2.5D scene flow estimation of volumetric OCT images for application in laser ablation. We present a semi-supervised convolutional neural network based tracking scheme for subsequent 3D OCT volumes and apply it to a realistic semi-synthetic data set of ex vivo human temporal bone specimens. The scene flow is estimated in a two-stage approach. In the first stage, 2D lateral scene flow is computed on census-transformed en-face arguments-of-maximum intensity projections. Subsequently, the projections are warped by the predicted lateral flow and 1D depth flow is estimated. The neural network is trained semi-supervised by combining the error with respect to ground truth and the reconstruction error of warped images, under assumptions of spatial flow smoothness. Quantitative evaluation reveals a mean endpoint error of (4.7 ± 3.5) voxels or (27.5 ± 20.5) μm for scene flow estimation caused by simulated relative movement between the OCT probe and bone. Scene flow estimation for 4D OCT enables its use for markerless tracking of mastoid bone structures for image guidance in general, and for automated laser ablation control. © 2019 SPIE.
- Published
- 2019
33. PO-718-08 PREDICTING TISSUE CONDUCTANCE CHANGES AND ABLATION LESION PATTERNS USING A QUASI-DYNAMIC PULSED FIELD ELECTROPORATION NUMERICAL MODEL FOR CARDIAC ABLATION
- Author
-
Mehta, Nishaki, Simon, Richard, Shah, Kuldeep, Haines, David E., and Linte, Cristian
- Published
- 2022
- Full Text
- View/download PDF
34. Therapeutic Systems and Technologies: State-of-the-Art Applications, Opportunities, and Challenges.
- Author
-
Almekkawy, Mohamed, Zderic, Vesna, Chen, Jie, Ellis, Michael D., Haemmerich, Dieter, Holmes III, David R., Linte, Cristian A., Panescu, Dorin, Pearce, John, and Prakash, Punit
- Abstract
In this review, we present current state-of-the-art developments and challenges in the areas of thermal therapy, ultrasound tomography, image-guided therapies, ocular drug delivery, and robotic devices in neurorehabilitation. Additionally, intellectual property and regulatory aspects pertaining to therapeutic systems and technologies are addressed. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
35. A distance map regularized CNN for cardiac cine MR image segmentation.
- Author
-
Dangi, Shusil, Linte, Cristian A., and Yaniv, Ziv
- Subjects
- *
MAGNETIC resonance imaging , *ARTIFICIAL neural networks , *CARDIOGRAPHIC tomography , *BLOOD volume , *IMAGE segmentation , *WEIGHT loss - Abstract
Purpose: Cardiac image segmentation is a critical process for generating personalized models of the heart and for quantifying cardiac performance parameters. Fully automatic segmentation of the left ventricle (LV), the right ventricle (RV), and the myocardium from cardiac cine MR images is challenging due to the variability of normal and abnormal anatomy, as well as of the imaging protocols. This study proposes a multi-task learning (MTL)-based regularization of a convolutional neural network (CNN) to obtain accurate segmentation of the cardiac structures from cine MR images. Methods: We train a CNN network to perform the main task of semantic segmentation, along with the simultaneous, auxiliary task of pixel-wise distance map regression. The network also predicts uncertainties associated with both tasks, such that their losses are weighted by the inverse of their corresponding uncertainties. As a result, during training, the task featuring a higher uncertainty is weighted less and vice versa. The proposed distance map regularizer is a decoder network added to the bottleneck layer of an existing CNN architecture, facilitating the network to learn robust global features. The regularizer block is removed after training, so that the original number of network parameters does not change. The trained network outputs a per-pixel segmentation when a new patient cine MR image is provided as input. Results: We show that the proposed regularization method improves both binary and multi-class segmentation performance over the corresponding state-of-the-art CNN architectures. The evaluation was conducted on two publicly available cardiac cine MRI datasets, yielding average Dice coefficients of 0.84 ± 0.03 and 0.91 ± 0.04. We also demonstrate improved generalization performance of the distance map regularized network on cross-dataset segmentation, showing as much as a 42% improvement in myocardium Dice coefficient, from 0.56 ± 0.28 to 0.80 ± 0.14.
Conclusions: We have presented a method for accurate segmentation of cardiac structures from cine MR images. Our experiments verify that the proposed method exceeds the segmentation performance of three existing state‐of‐the‐art methods. Furthermore, several cardiac indices that often serve as diagnostic biomarkers, specifically blood pool volume, myocardial mass, and ejection fraction, computed using our method are better correlated with the indices computed from the reference, ground truth segmentation. Hence, the proposed method has the potential to become a non‐invasive screening and diagnostic tool for the clinical assessment of various cardiac conditions, as well as a reliable aid for generating patient specific models of the cardiac anatomy for therapy planning, simulation, and guidance. [ABSTRACT FROM AUTHOR]
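The uncertainty-based task weighting described in the Methods can be sketched as a scalar loss combination in the style of learned log-variance weighting; the loss values and log-variances below are hypothetical:

```python
import math

def multitask_loss(seg_loss, reg_loss, log_var_seg, log_var_reg):
    """Combine a segmentation loss and a distance-map regression loss, each
    weighted by the inverse of its (learned) uncertainty, plus log-variance
    regularization terms so the uncertainties cannot grow without bound."""
    return (math.exp(-log_var_seg) * seg_loss + log_var_seg
            + math.exp(-log_var_reg) * reg_loss + log_var_reg)

# A higher uncertainty on the auxiliary regression task down-weights its loss
balanced = multitask_loss(0.4, 0.9, 0.0, 0.0)
downweighted = multitask_loss(0.4, 0.9, 0.0, 1.0)
print(balanced > downweighted - 1.0)
```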
- Published
- 2019
- Full Text
- View/download PDF
36. Chapter 4 - Image-Guided Procedures: Tools, Techniques, and Clinical Applications
- Author
-
Linte, Cristian A., Moore, John T., Chen, Elvis C.S., and Peters, Terry M.
- Published
- 2016
- Full Text
- View/download PDF
37. Left Ventricular Ejection Fraction Assessment: Unraveling the Bias between Area- and Volume-based Estimates.
- Author
-
Dawei Liu, Peck, Isabelle, Dangi, Shusil, Schwarz, Karl Q., and Linte, Cristian A.
- Published
- 2019
- Full Text
- View/download PDF
38. Contributors
- Author
-
Abolmaesumi, Purang, Ahmad, Sahar, Anderson, William S., Asman, Andrew J., Ben-Cohen, Avi, Biswas, Pradipta, Boctor, Emad, Bodenstedt, Sebastian, Breton, Elodie, Cao, Xiaohuan, Carass, Aaron, Cardoso, M. Jorge, Castillo Tovar, Jose M., Chen, Elvis C.S., Chen, Hao, Chen, Zhi, Cheng, Jie-Zhi, Cootes, T.F., Davatzikos, Christos, de Ribaupierre, Sandrine, Dewey, Blake E., DiPietro, Robert, Dong, Pei, Dou, Qi, Eagleson, Roy, Elson, Daniel S., Erus, Guray, Esposito, M., Essert, Caroline, Fan, Jingfan, Fenster, Aaron, Fichtinger, Gabor, Gangi, Afshin, Garnon, Julien, Gilbert, Kathleen, Glocker, Ben, Greenspan, Hayit, Habes, Mohamad, Hacihaliloglu, Ilker, Hager, Gregory D., Hammernik, Kerstin, Heinrich, Mattias P., Heng, Pheng-Ann, Hennersperger, C., Hensen, Bennet, Holden, Matthew, Huo, Yuankai, Ilse, Maximilian, Išgum, Ivana, Joskowicz, Leo, Judy, Brendan F., Kägebein, Urte, Kamnitsas, Konstantinos, Kashyap, Satyananda, Kazanzides, Peter, Klein, Stefan, Knoll, Florian, Konukoglu, Ender, Kulkarni, Pankaj Pramod, Landman, Bennett A., Lasso, Andras, Ledig, Christian, Lee, Kyungmoo, Linte, Cristian A., Lu, Le, Lyu, Ilwoo, Maier-Hein, Lena, Metaxas, Dimitris N., Miller, Karol, Modat, Marc, Mori, Kensaku, Moriakov, Nikita, Mousavi, Parvin, Navab, N., Niessen, Wiro J., Ortiz, Marilu, Ourselin, Sebastien, Papież, Bartłomiej W., Peng, Yifan, Pham, Dzung L., Pontre, Beau, Prince, Jerry L., Qi, Xiaojuan, Rothgang, Eva, Roy, Snehashis, Schafer, Sebastian, Shen, Dinggang, Sikander, Sakura, Siewerdsen, Jeffrey H., Song, Sang-Eun, Sonka, Milan, Speidel, Stefanie, Starmans, Martijn P.A., Stefan, P., Stoyanov, Danail, Sudre, Carole H., Suinesiaputra, Avan, Taylor, Russell H., Teuwen, Jonas, Tomczak, Jakub M., Traub, J., Ungi, Tamas, van der Voort, Sebastian R., Vasconcelos, Francisco, Vedula, S. Swaroop, Veenland, Jifke F., Wacker, Frank K., Wang, Xiaosong, Welling, Max, Wittek, Adam, Wolterink, Jelmer M., Xiong, Tao, Xu, Daguang, Xu, Zhoubing, Yan, Zhennan, Yang, Dong, Yang, Lin, Yap, Pew-Thian, Young, Alistair A., Zevin, Boris, Zhang, Honghai, Zhang, Zizhao, Zhao, Can, Zheng, Yefeng, and Zhou, S. Kevin
- Published
- 2020
- Full Text
- View/download PDF
39. Toward modeling the effects of regional material properties on the wall stress distribution of abdominal aortic aneurysms.
- Author
-
Jalalahmadi, Golnaz, Helguera, María, Mix, Doran S., and Linte, Cristian A.
- Published
- 2018
- Full Text
- View/download PDF
40. Cine Cardiac MRI Slice Misalignment Correction Towards Full 3D Left Ventricle Segmentation.
- Author
-
Dangi, Shusil, Linte, Cristian A., and Yaniv, Ziv
- Published
- 2018
- Full Text
- View/download PDF
41. Technical Note: On Cardiac Ablation Lesion Visualization for Image-guided Therapy Monitoring.
- Author
-
Linte, Cristian A., Camp, Jon J., Rettmann, Maryam E., Haemmerich, Dieter, Aktas, Mehmet K., Huang, David T., Packer, Douglas L., and Holmes III, David R.
- Published
- 2018
- Full Text
- View/download PDF
42. A marker-free registration method for standing X-ray panorama reconstruction for hip-knee-ankle axis deformity assessment.
- Author
-
Ben-Zikri, Yehuda K., Yaniv, Ziv R., Baum, Karl, and Linte, Cristian A.
- Subjects
BIOLOGICAL tags ,KNEE abnormalities ,HIP joint abnormalities ,X-ray imaging ,ANKLE abnormalities ,KNEE physiology - Abstract
Accurate measurement of knee alignment, quantified by the hip-knee-ankle (HKA) angle (varus-valgus), serves as an essential biomarker in the diagnosis of various orthopaedic conditions and the selection of appropriate therapies. Such angular deformities are assessed from standing X-ray panoramas. However, the limited field-of-view of traditional X-ray imaging systems necessitates the acquisition of several sector images to capture an individual's standing posture, and their subsequent 'stitching' to reconstruct a panoramic image. Such panoramas are typically constructed manually by an X-ray imaging technician, often using various external markers attached to the individual's clothing and visible in two adjacent sector images. To eliminate human error and user-induced variability, improve consistency and reproducibility, and reduce the time associated with the traditional manual 'stitching' protocol, here we propose an automatic panorama construction method that relies only on anatomical features reliably detected in the images, eliminating the need for any external markers or manual input from the technician. The method first performs a rough segmentation of the femur and the tibia; the sector images are then registered by evaluating a distance metric between the corresponding bones along their medial edge. The identified translations are then used to generate the standing panorama image. The method was evaluated on 95 patient image datasets from a database of X-ray images acquired across 10 clinical sites as part of the screening process for a multi-site clinical trial. The panorama reconstruction parameters yielded by the proposed method were compared to those used for the manual panorama construction, which served as the gold standard. The horizontal translation differences were mm and mm for the femur and tibia, respectively, while the vertical translation differences were mm and mm for the femur and tibia, respectively. Our results showed no statistically significant differences between the HKA angles measured using the automated vs. the manually generated panoramas, and also led to similar decisions with regard to patient inclusion/exclusion in the clinical trial. Thus, the proposed method was shown to provide comparable performance to manual panorama construction, with increased efficiency, consistency and robustness. [ABSTRACT FROM AUTHOR]
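The registration step, evaluating a distance metric between corresponding bone edges over candidate translations, can be sketched as a simple exhaustive 1D search; this is an illustrative stand-in, not the authors' implementation:

```python
import numpy as np

def best_vertical_shift(edge_a, edge_b, search_range=range(-20, 21)):
    """Find the integer vertical translation aligning two medial-edge
    profiles (one x-coordinate per image row) by minimizing the mean
    absolute distance between the profiles."""
    best_t, best_cost = 0, float("inf")
    for t in search_range:
        shifted = np.roll(edge_b, t)
        cost = np.mean(np.abs(edge_a - shifted))
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Synthetic medial-edge profile and a copy shifted by 5 rows
edge = np.sin(np.linspace(0, 3, 50)) * 10
print(best_vertical_shift(edge, np.roll(edge, -5)))  # 5
```

A full implementation would search both horizontal and vertical offsets over the segmented femur and tibia edges, but the cost-minimization structure is the same.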
- Published
- 2019
- Full Text
- View/download PDF
43. Deep learning architecture for 3D image super-resolution of late gadolinium enhanced cardiac MRI.
- Author
-
Upendra, Roshan Reddy, Simon, Richard, and Linte, Cristian A.
- Published
- 2023
- Full Text
- View/download PDF
44. Integrating Atlas and Graph Cut Methods for Left Ventricle Segmentation from Cardiac Cine MRI.
- Author
-
Dangi, Shusil, Cahill, Nathan, and Linte, Cristian A.
- Published
- 2017
- Full Text
- View/download PDF
45. A deep learning framework to estimate elastic modulus from ultrasound measured displacement fields.
- Author
-
Tuladhar, Utsav Ratna, Simon, Richard A., Linte, Cristian A., and Richards, Michael S.
- Published
- 2023
- Full Text
- View/download PDF
46. Developing and evaluating the fidelity of patient specific kidney emulating phantoms for image-guided intervention applications.
- Author
-
Merrell, Kelly, Jackson, Peter, Simon, Richard, and Linte, Cristian
- Published
- 2023
- Full Text
- View/download PDF
47. Investigating the impact of class-dependent label noise in medical image classification.
- Author
-
Khanal, Bidur, Hasan, S.M. Kamrul, Khanal, Bishesh, and Linte, Cristian A.
- Published
- 2023
- Full Text
- View/download PDF
48. Toward Modeling of Radio-frequency Ablation Lesions for Image-guided Left Atrial Fibrillation Therapy: Model Formulation and Preliminary Evaluation
- Author
-
Linte, Cristian A., Camp, Jon J., Holmes, David R., Rettmann, Maryam E., Packer, Douglas L., and Robb, Richard A.
- Subjects
User-Computer Interface ,Treatment Outcome ,Surgery, Computer-Assisted ,Atrial Fibrillation ,Catheter Ablation ,Models, Cardiovascular ,Humans ,Computer Simulation ,Pilot Projects ,Article - Abstract
In the context of image-guided left atrial fibrillation therapy, relatively little work has considered the changes that occur in the tissue during ablation as a means to monitor therapy delivery. Here we describe a technique to predict lesion progression and monitor radio-frequency energy delivery via a thermal ablation model that uses heat transfer principles to estimate the tissue temperature distribution and the resulting lesion. A preliminary evaluation of the model was conducted in ex vivo skeletal beef muscle tissue while emulating a clinically relevant tissue ablation protocol. The predicted temperature distribution within the tissue was assessed against that measured directly using fiberoptic temperature probes, and showed agreement within 5°C between the model-predicted and experimentally measured tissue temperatures at prescribed locations. We believe this technique is capable of providing reasonably accurate representations of the tissue response to radio-frequency energy delivery.
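The heat-transfer principle underlying such a model can be illustrated with an explicit finite-difference update of the 1D heat equation; the thermal diffusivity, grid spacing, and temperatures below are toy values, not the model's parameters:

```python
import numpy as np

def heat_1d(temp, alpha, dx, dt, steps):
    """Explicit FTCS update of the 1D heat equation dT/dt = alpha * d2T/dx2,
    with fixed (Dirichlet) temperatures at both boundaries."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "stability condition for the explicit scheme"
    T = temp.astype(float).copy()
    for _ in range(steps):
        T[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T

# Tissue at 37 C with one boundary held at 80 C by an RF source (toy numbers)
T0 = np.full(21, 37.0)
T0[0] = 80.0
T = heat_1d(T0, alpha=1.4e-7, dx=1e-3, dt=2.0, steps=200)
print(round(T[1], 1))
```

Heat diffuses inward from the heated boundary, so interior temperatures rise monotonically toward the source temperature; the full model would add source terms for RF power deposition and perfusion losses.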
- Published
- 2013
49. Lesion modeling, characterization, and visualization for image-guided cardiac ablation therapy monitoring.
- Author
-
Linte, Cristian A., Camp, Jon J., Rettmann, Maryam E., Haemmerich, Dieter, Aktas, Mehmet K., Huang, David T., Packer, Douglas L., and Holmes III., David R.
- Published
- 2018
- Full Text
- View/download PDF
50. Inpainting surgical occlusion from laparoscopic video sequences for robot-assisted interventions.
- Author
-
Hasan, S. M. Kamrul, Simon, Richard A., and Linte, Cristian A.
- Published
- 2023
- Full Text
- View/download PDF