49 results for "R Roth"
Search Results
2. Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images
- Author
- Ali Hatamizadeh, Vishwesh Nath, Yucheng Tang, Dong Yang, Holger R. Roth, and Daguang Xu
- Published
- 2022
- Full Text
- View/download PDF
3. Attention-Guided Pancreatic Duct Segmentation from Abdominal CT Volumes
- Author
- Chen Shen, Holger R. Roth, Yuichiro Hayashi, Masahiro Oda, Tadaaki Miyamoto, Gen Sato, and Kensaku Mori
- Published
- 2021
- Full Text
- View/download PDF
4. The Power of Proxy Data and Proxy Networks for Hyper-parameter Optimization in Medical Image Segmentation
- Author
- Daguang Xu, Dong Yang, Andriy Myronenko, Ali Hatamizadeh, Anas A. Abidin, Holger R. Roth, and Vishwesh Nath
- Subjects
- Hyperparameter, Speedup, Computer science, Deep learning, Image segmentation, Proxy, Segmentation, Generalizability, Data mining, Artificial intelligence
- Abstract
Deep learning models for medical image segmentation are primarily data-driven: models trained with more data achieve better performance and generalizability. However, training is computationally expensive because multiple hyper-parameters must be tested to find the optimal setting. In this work, we focus on accelerating the estimation of hyper-parameters by proposing two novel methodologies: proxy data and proxy networks. Both can be used to estimate hyper-parameters more efficiently. We test the proposed techniques on CT and MR imaging modalities using well-known public datasets, in both cases using one dataset for building proxy data and another data source for external evaluation. For CT, the approach is tested on spleen segmentation with two datasets: the first, from the Medical Segmentation Decathlon (MSD), is used to construct the proxy data, while the second serves as an external validation dataset. Similarly, for MR, the approach is evaluated on prostate segmentation, where the first dataset is from MSD and the second is PROSTATEx. First, we show that a small, purposefully selected proxy dataset correlates better with full-data training on the external validation set than a random selection of the same size. Second, we show that proxy networks correlate highly with the full network on validation Dice score. Third, we show that the proposed approach of utilizing a proxy network can speed up an AutoML framework for hyper-parameter search by 3.3×, and by 4.4× when proxy data and proxy networks are utilized together.
- Published
- 2021
- Full Text
- View/download PDF
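The proxy search described in the abstract above can be sketched as follows. This is a toy illustration, not the paper's code: the stub trainer, its invented response surface (peaked at hp = 0.3), and the data/width penalties are all assumptions made for the sketch. Candidates are ranked cheaply on proxy data with a proxy (narrow) network, and only the winner is trained at full cost.

```python
def train_and_score(hp, data, width):
    # Stub standing in for training a (proxy) network and returning a
    # validation Dice score; the toy response surface peaks at hp = 0.3,
    # with small penalties for less data and a narrower network.
    return round(1.0 - abs(hp - 0.3)
                 - 0.001 * (100 - len(data))
                 - 0.001 * (64 - width), 4)

full_data, proxy_data = list(range(100)), list(range(10))
FULL_WIDTH, PROXY_WIDTH = 64, 16

candidates = [0.1, 0.2, 0.3, 0.4, 0.5]
# cheap search: every candidate is scored on proxy data with a proxy network
best_hp = max(candidates, key=lambda h: train_and_score(h, proxy_data, PROXY_WIDTH))
# a single expensive run with the selected hyper-parameter
final_dice = train_and_score(best_hp, full_data, FULL_WIDTH)
```

The speedups reported in the abstract come from exactly this structure: many cheap proxy evaluations, one full-cost run.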
5. Accounting for Dependencies in Deep Learning Based Multiple Instance Learning for Whole Slide Imaging
- Author
- Dong Yang, Ziyue Xu, Daguang Xu, Andriy Myronenko, and Holger R. Roth
- Subjects
- Pixel, Computer science, Deep learning, Embedding, Pattern recognition, Artificial intelligence, Convolutional neural network, Transformer (machine learning model)
- Abstract
Multiple instance learning (MIL) is a key algorithm for classification of whole slide images (WSI). Histology WSIs can have billions of pixels, which create enormous computational and annotation challenges. Typically, such images are divided into a set of patches (a bag of instances), where only bag-level class labels are provided. Deep learning based MIL methods calculate instance features using a convolutional neural network (CNN). Our proposed approach is also deep learning based, with the following two contributions: First, we propose to explicitly account for dependencies between instances during training by embedding self-attention Transformer blocks. For example, a tumor grade may depend on the presence of several particular patterns at different locations in a WSI, which requires accounting for dependencies between patches. Second, we propose an instance-wise loss function based on instance pseudo-labels. We compare the proposed algorithm to multiple baseline methods, evaluate it on the PANDA challenge dataset, the largest publicly available WSI dataset with over 11K images, and demonstrate state-of-the-art results.
- Published
- 2021
- Full Text
- View/download PDF
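The paper's self-attention Transformer blocks are heavier machinery than fits here, but the attention-based MIL pooling they build on can be sketched minimally (all weights and patch features below are invented for illustration): each instance receives an attention score, the scores are normalized, and the weighted bag embedding is classified.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_mil(instances, w_score, w_cls):
    # Score each instance (patch feature), normalize into attention
    # weights, pool into a bag embedding, and classify the bag.
    scores = [sum(w * f for w, f in zip(w_score, feat)) for feat in instances]
    alphas = softmax(scores)
    bag = [sum(a * feat[d] for a, feat in zip(alphas, instances))
           for d in range(len(w_cls))]
    logit = sum(w * b for w, b in zip(w_cls, bag))
    return alphas, logit

# a bag of two toy 2-D patch features
alphas, logit = attention_mil([[1.0, 0.0], [0.0, 1.0]],
                              w_score=[1.0, 0.0], w_cls=[1.0, 0.0])
```

Unlike this pooling, the paper's Transformer blocks also let instance features interact before pooling, which is what captures inter-patch dependencies.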
6. Multi-task Federated Learning for Heterogeneous Pancreas Segmentation
- Author
- Chen Shen, Pochuan Wang, Holger R. Roth, Dong Yang, Daguang Xu, Masahiro Oda, Weichung Wang, Chiou-Shann Fuh, Po-Ting Chen, Kao-Lang Liu, Wei-Chih Liao, and Kensaku Mori
- Published
- 2021
- Full Text
- View/download PDF
7. Federated Whole Prostate Segmentation in MRI with Personalized Neural Architectures
- Author
- Wenqi Li, Dong Yang, Andriy Myronenko, Daguang Xu, Wentao Zhu, Xiaosong Wang, Holger R. Roth, and Ziyue Xu
- Subjects
- Computer science, Deep learning, Machine learning, Federated learning, Artificial intelligence, Architecture, Adaptation, Prostate segmentation
- Abstract
Building robust deep learning-based models requires diverse training data, ideally from several sources. However, these datasets cannot be combined easily because of patient privacy concerns or regulatory hurdles, especially when medical data is involved. Federated learning (FL) is a way to train machine learning models without the need for centralized datasets: each FL client trains on its local data while sharing only model parameters with a global server that aggregates the parameters from all clients. At the same time, each client's data can exhibit differences and inconsistencies due to local variation in the patient population, imaging equipment, and acquisition protocols. Hence, the federated models should be able to adapt to the local particularities of a client's data. In this work, we combine FL with an AutoML technique based on local neural architecture search by training a "supernet". Furthermore, we propose an adaptation scheme to allow for personalized model architectures at each FL client's site. The proposed method is evaluated on four different datasets of 3D prostate MRI and shown to improve the local models' performance after adaptation by selecting an optimal path through the AutoML supernet.
- Published
- 2021
- Full Text
- View/download PDF
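A minimal sketch of the two ingredients in the abstract above, with all parameter values, path names, and scores invented for illustration: FedAvg-style aggregation on the server, plus a personalization step in which each client keeps whichever supernet path scores best on its own validation data.

```python
def fed_avg(client_params, client_sizes):
    # Server step: size-weighted average of the parameter vectors that
    # clients trained locally; raw data never leaves a site.
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(p[d] * s for p, s in zip(client_params, client_sizes)) / total
            for d in range(dim)]

global_params = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[10, 30])

# Personalization step: each client evaluates candidate paths through the
# shared supernet on its own validation data and keeps the best one.
local_path_dice = {"narrow": 0.81, "wide": 0.86, "deep": 0.84}
personal_path = max(local_path_dice, key=local_path_dice.get)
```

The size weighting means the client with 30 cases pulls the global parameters three times harder than the one with 10, while path selection stays purely local.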
8. International Students Need Not Apply: Impact of US Immigration Policy in the Trump Era on International Student Enrollment and Campus Experiences
- Author
- Kenneth R. Roth and Zachary S. Ritter
- Subjects
- Immigration policy, Political science, Political economy, Pandemic, Institution, Revenue, Administration (government), Nationalism
- Abstract
This chapter chronicles the devastating effects the Trump Administration's immigration policies have had not only on the dwindling number of international students who attend university in the United States, but also on their mental and physical well-being as they navigate increasingly xenophobic and nationalist discourses both on and off campus. The states of Delaware and Kentucky are highlighted as unique examples of policies to maintain and increase international student enrollments, while also revealing the dubious neoliberal underpinnings driving these initiatives. As of 2018, international students generated $45 billion in revenue for the US economy. With stricter immigration policies and the 2020 global pandemic threatening a number of US institutions' very survival, a continued decrease in international student enrollments will potentially impose irreversible fiscal and intellectual costs on the American academy that cannot be overstated.
- Published
- 2020
- Full Text
- View/download PDF
9. 4D CNN for Semantic Segmentation of Cardiac Volumetric Sequences
- Author
- Dong Yang, Daguang Xu, Holger R. Roth, Alvin Ihsani, Varun Buch, Mark Michalski, Sean Doyle, Andriy Myronenko, and Neil A. Tenenholtz
- Subjects
- Computer science, Volumetric data, 3D segmentation, Segmentation, Pattern recognition, Artificial intelligence, Convolutional neural network
- Abstract
We propose a 4D convolutional neural network (CNN) for the segmentation of retrospective ECG-gated cardiac CT, a series of single-channel volumetric data over time. While only a small subset of volumes in the temporal sequence is annotated, we define a sparse loss function on the available labels to allow the network to leverage unlabeled images during training and generate a fully segmented sequence. We investigate the accuracy of the proposed 4D network in predicting temporally consistent segmentations and compare it with traditional 3D segmentation approaches. We demonstrate the feasibility of the 4D CNN and establish its performance on cardiac 4D CCTA (video: https://drive.google.com/uc?id=1n-GJX5nviVs8R7tque2zy2uHFcN_Ogn1).
- Published
- 2020
- Full Text
- View/download PDF
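The sparse loss in the abstract above reduces to a masked objective: only annotated time points contribute to the error, so the network can train on a partially labeled sequence. A minimal sketch, with tiny two-voxel "volumes" and values invented for illustration:

```python
def sparse_loss(preds, labels):
    # Mean squared error over labeled frames only; frames whose label is
    # None are skipped entirely and contribute nothing to the loss.
    terms = []
    for pred, lab in zip(preds, labels):
        if lab is None:
            continue
        terms.extend((p - y) ** 2 for p, y in zip(pred, lab))
    return sum(terms) / len(terms)

seq_preds  = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]]   # three time points
seq_labels = [[1.0, 0.0], None,       [1.0, 0.0]]   # middle frame unannotated
loss = sparse_loss(seq_preds, seq_labels)
```

The unlabeled middle frame still passes through the shared 4D network at train time; it simply receives no direct supervision.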
10. Automated Pancreas Segmentation Using Multi-institutional Collaborative Deep Learning
- Author
- Weichung Wang, Po-Ting Chen, Chen Shen, Daguang Xu, Pochuan Wang, Kensaku Mori, Dong Yang, Wei-Chih Liao, Kao-Lang Liu, Kazunari Misawa, Holger R. Roth, and Masahiro Oda
- Subjects
- Computer science, Deep learning, Generalizability, Segmentation, Artificial intelligence, Raw data, Data science, Federated learning
- Abstract
The performance of deep learning based methods strongly depends on the amount of data used for training, and many efforts have been made to increase the available data in the medical image analysis field. However, unlike photographic images, medical images are hard to collect into centralized databases because of numerous technical, legal, and privacy issues. In this work, we study the use of federated learning between two institutions in a real-world setting to collaboratively train a model without sharing raw data across national boundaries. We quantitatively compare the segmentation models obtained with federated learning and with local training alone. Our experimental results show that federated learning models have higher generalizability than those from standalone training.
- Published
- 2020
- Full Text
- View/download PDF
11. Cardiac Segmentation of LGE MRI with Noisy Labels
- Author
- Daguang Xu, Dong Yang, Holger R. Roth, Ziyue Xu, and Wentao Zhu
- Subjects
- Ground truth, Computer science, Deep learning, Magnetic resonance imaging, Pattern recognition, Minimal supervision, Segmentation, Cardiovascular diseases, Artificial intelligence, Cardiac magnetic resonance
- Abstract
In this work, we attempt the segmentation of cardiac structures in late gadolinium-enhanced (LGE) magnetic resonance images (MRI) using only minimal supervision in a two-step approach. In the first step, we register a small set of five LGE cardiac magnetic resonance (CMR) images with ground truth labels to a set of 40 target LGE CMR images without annotation. Each manually annotated ground truth provides labels of the myocardium and the left ventricle (LV) and right ventricle (RV) cavities, which are used as atlases. After multi-atlas label fusion by majority voting, we possess noisy labels for each of the target LGE images. A second set of manual labels exists for 30 patients of the target LGE CMR images, but these are annotated on different MRI sequences (bSSFP and T2-weighted). Again, we use multi-atlas label fusion with a consistency constraint to further refine our noisy labels when additional annotations in other modalities are available for a given patient. In the second step, we train a deep convolutional network for semantic segmentation on the target data while using data augmentation techniques to avoid over-fitting to the noisy labels. After inference and simple post-processing, we achieve our final segmentation for the target LGE CMR images, resulting in an average Dice of 0.890, 0.780, and 0.844 for the LV cavity, LV myocardium, and RV cavity, respectively.
- Published
- 2020
- Full Text
- View/download PDF
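The multi-atlas fusion step in the first stage reduces to a per-voxel vote across the registered atlas label maps. A minimal sketch with toy 1-D "volumes" (the label codes are invented for illustration):

```python
from collections import Counter

def majority_vote(atlas_labels):
    # Fuse registered atlas label maps voxel by voxel: each voxel takes
    # the label most atlases agree on, yielding the "noisy label" map.
    fused = []
    for votes in zip(*atlas_labels):
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# three atlases voting on four voxels (0 = background, 1 = myocardium, 2 = LV)
noisy = majority_vote([[0, 1, 2, 2],
                       [0, 1, 1, 2],
                       [1, 1, 2, 0]])
```

The consistency constraint mentioned in the abstract would further filter these votes when annotations from other sequences exist for the same patient.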
12. LAMP: Large Deep Nets with Automated Model Parallelism for Image Segmentation
- Author
- Holger R. Roth, Wenqi Li, Wentao Zhu, Can Zhao, Daguang Xu, and Ziyue Xu
- Subjects
- Speedup, Computer science, Data parallelism, Deep learning, Parallel computing, Image segmentation, Sliding window, Memory footprint, Artificial intelligence
- Abstract
Deep Learning (DL) models are becoming larger because the increase in model size can offer significant accuracy gains. To enable the training of large deep networks, data parallelism and model parallelism are two well-known approaches to parallel training. However, data parallelism does not reduce the memory footprint per device. In this work, we introduce Large deep 3D ConvNets with Automated Model Parallelism (LAMP) and investigate the impact of both the input's and the deep 3D ConvNet's size on segmentation accuracy. Through automated model parallelism, it is feasible to train large deep 3D ConvNets with a large input patch, or even the whole image. Extensive experiments demonstrate that, facilitated by automated model parallelism, segmentation accuracy can be improved by increasing model size and input context size, and that large inputs yield significant inference speedup compared with sliding-window inference over small patches. Code is available (https://monai.io/research/lamp-automated-model-parallelism).
- Published
- 2020
- Full Text
- View/download PDF
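Automated model parallelism ultimately needs a layer-to-device placement. The greedy balancing sketch below conveys the core idea only; it is not the published implementation, and the per-layer memory figures are invented for illustration.

```python
def partition_layers(layer_mem, n_devices):
    # Greedy split of a sequential network: open a new device once the
    # running memory total would exceed an even share of the whole model.
    target = sum(layer_mem) / n_devices
    parts, used = [[]], 0.0
    for i, mem in enumerate(layer_mem):
        if parts[-1] and used + mem > target and len(parts) < n_devices:
            parts.append([])
            used = 0.0
        parts[-1].append(i)
        used += mem
    return parts

# per-layer activation memory (GB) of a toy encoder-decoder, split over 2 GPUs
placement = partition_layers([4, 4, 2, 2, 4, 4], n_devices=2)
```

Because each device now holds only part of the network, the per-device memory ceiling no longer caps the total model or input size, which is what makes whole-image inputs feasible.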
13. Federated Learning for Breast Density Classification: A Real-World Implementation
- Author
- Vitor Lavor, Varun Buch, Behrooz Hashemian, Jayashree Kalpathy-Cramer, Nir Neumark, Daguang Xu, Prerna Dogra, Miao Zhang, Yan Cheng, Etta D. Pisano, B. Min Yun, Vikash Gupta, Jay B. Patel, Ahmed Harouni, Keith J. Dreyer, Alvin Ihsani, Bryan Chen, Praveer Singh, Richard D. White, Wenqi Li, Colin B. Compas, Sharut Gupta, Thomas J. Schultz, Meesam Shah, Jesse Tetreault, Daniel L. Rubin, Sean Ko, Ken Chang, Laura Coombs, Ram C. Naidu, Evan Leibovitz, Holger R. Roth, Liangqiong Qu, Yuhong Wen, Katharina Hoebel, Ittai Dayan, Bernardo Bizzo, Felipe Kitamura, Matheus Ribeiro Furtado de Mendonça, Mona Flores, Elshaimaa Sharaf, Selnur Erdal, and Adam McCarthy
- Subjects
- Breast imaging, Computer science, Deep learning, BI-RADS, Machine learning, Mammography, Data set, Generalizability, Artificial intelligence, Test data
- Abstract
Building robust deep learning-based models requires large quantities of diverse training data. In this study, we investigate the use of federated learning (FL) to build medical imaging classification models in a real-world collaborative setting. Seven clinical institutions from across the world joined this FL effort to train a model for breast density classification based on the Breast Imaging Reporting and Data System (BI-RADS). We show that despite substantial differences among the datasets from all sites (mammography system, class distribution, and dataset size) and without centralizing data, we can successfully train AI models in federation. The results show that models trained using FL perform on average 6.3% better than their counterparts trained on an institute's local data alone. Furthermore, we show a 45.8% relative improvement in the models' generalizability when evaluated on the other participating sites' testing data.
- Published
- 2020
- Full Text
- View/download PDF
14. Interactive 3D Segmentation Editing and Refinement via Gated Graph Neural Networks
- Author
- Ziyue Xu, Holger R. Roth, Xiaosong Wang, Daguang Xu, and Ling Zhang
- Subjects
- Graph neural networks, Computer science, 3D image, Interactive 3D, Polygon, Segmentation, Pattern recognition, Artificial intelligence, Directed graph, Vertex
- Abstract
The extraction of organ and lesion regions is an important yet challenging problem in medical image analysis. The accuracy of the segmentation is essential to quantitative evaluation in many clinical applications. Nevertheless, automated segmentation approaches often suffer from a variety of errors, e.g., over-segmentation, under-detection, and dull edges, which often require manual corrections of the algorithm-generated results. Therefore, an efficient segmentation editing and refinement tool is desired, due to the need for (1) minimizing the repeated effort of human annotators on similar errors (e.g., under-segmentation across several slices in 3D volumes); and (2) an "intelligent" algorithm that can preserve the correct part of the segmentation while aligning the erroneous part with the true boundary based on the user's limited input. This paper presents a novel solution that utilizes gated graph neural networks to interactively refine 3D image volume segmentations produced by automated methods. The pre-computed segmentation is converted to polygons in a slice-by-slice manner, and we then construct the graph by defining polygon vertices across slices as nodes in a directed graph. The nodes are modeled with gated recurrent units to propagate features among neighboring nodes. Afterward, our framework outputs a movement prediction for each polygon vertex based on the converged node states. We quantitatively demonstrate the refinement performance of our framework on artificially degraded segmentation data. Up to 10% improvement in IoU is achieved for segmentations with a variety of error degrees and percentages.
- Published
- 2019
- Full Text
- View/download PDF
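One building block of the pipeline above, propagating information between neighboring vertices of a closed polygon, can be sketched with a simple mean update standing in for the gated recurrent unit (the scalar vertex features and weighting are invented for this sketch):

```python
def propagate(vertex_feats, steps=1):
    # Each vertex of the closed contour mixes its feature with its two
    # polygon neighbors; repeated steps spread evidence along the boundary.
    n = len(vertex_feats)
    feats = list(vertex_feats)
    for _ in range(steps):
        feats = [(feats[(i - 1) % n] + 2.0 * feats[i] + feats[(i + 1) % n]) / 4.0
                 for i in range(n)]
    return feats

smoothed = propagate([0.0, 4.0, 0.0, 0.0], steps=1)
```

In the actual framework, a gated recurrent unit replaces the fixed average, and the converged vertex states are decoded into per-vertex movement predictions.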
15. Weakly Supervised Segmentation from Extreme Points
- Author
- Daguang Xu, Ling Zhang, Fausto Milletari, Holger R. Roth, Xiaosong Wang, Dong Yang, and Ziyue Xu
- Subjects
- Random walker algorithm, Computer science, Deep learning, Segmentation, Pattern recognition, Artificial intelligence, Extreme point
- Abstract
Annotation of medical images has been a major bottleneck for the development of accurate and robust machine learning models. Annotation is costly and time-consuming, and typically requires expert knowledge, especially in the medical domain. Here, we propose to use minimal user interaction in the form of extreme point clicks to train a segmentation model that can, in turn, be used to speed up the annotation of medical images. We use extreme points in each dimension of a 3D medical image to constrain an initial segmentation based on the random walker algorithm. This segmentation is then used as a weak supervisory signal to train a fully convolutional network that can segment the organ of interest based on the provided user clicks. We show that the network's predictions can be refined through several iterations of training and prediction using the same weakly annotated data. Ultimately, our method has the potential to speed up the creation of new training datasets for the development of new machine learning and deep learning based models for medical image analysis, among other applications.
- Published
- 2019
- Full Text
- View/download PDF
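The six extreme clicks fix the organ's bounding box and provide foreground seeds for the random-walker initialization. A sketch of that preprocessing step (the helper name and the coordinates are invented; the random walker itself is not shown):

```python
def extreme_points_to_seeds(points):
    # The six clicks mark the organ's min/max along each axis, so the
    # axis-wise extrema give a tight bounding box; the clicks themselves
    # (plus their centroid) serve as foreground seeds.
    lo = tuple(min(p[d] for p in points) for d in range(3))
    hi = tuple(max(p[d] for p in points) for d in range(3))
    centroid = tuple(sum(p[d] for p in points) / len(points) for d in range(3))
    return {"bbox": (lo, hi), "fg_seeds": list(points) + [centroid]}

clicks = [(2, 5, 5), (9, 5, 5), (5, 1, 5), (5, 8, 5), (5, 5, 0), (5, 5, 7)]
init = extreme_points_to_seeds(clicks)
```

The random-walker segmentation constrained by these seeds then becomes the weak label that supervises the fully convolutional network.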
16. Unsupervised Segmentation of Micro-CT Images of Lung Cancer Specimen Using Deep Generative Models
- Author
- Midori Mitarai, Holger R. Roth, Takayasu Moriya, Masahiro Oda, Hirohisa Oda, Kensaku Mori, and Shota Nakamura
- Subjects
- Computer science, Supervised learning, Cancer, Pattern recognition, Latent variable, Generative model, Voxel, Segmentation, Artificial intelligence, Tomography, Lung cancer, Categorical variable
- Abstract
This paper presents a novel unsupervised segmentation method for the three-dimensional microstructure of lung cancer specimens in micro-computed tomography (micro-CT) images. Micro-CT scanning can nondestructively capture detailed histopathological components of resected lung cancer specimens. However, it is difficult to manually annotate cancer components on micro-CT images. Moreover, since most recent segmentation methods using deep neural networks rely on supervised learning, it is also difficult to cope with unlabeled micro-CT images. In this paper, we propose an unsupervised segmentation method using a deep generative model. Our method consists of two phases. In the first phase, we train our model by iterating two steps: (1) inferring pairs of continuous and categorical latent variables for image patches randomly extracted from an unlabeled image and (2) reconstructing image patches from the inferred pairs of latent variables. In the second phase, our trained model estimates the probabilities of belonging to each category and assigns labels to patches from an entire image in order to obtain the segmented image. We apply our method to seven micro-CT images of resected lung cancer specimens. The original sizes of the micro-CT images were 1024 × 1024 × (544–2185) voxels, and their resolutions were 25–30 µm/voxel. Our aim was to automatically divide each image into three regions: invasive carcinoma, noninvasive carcinoma, and normal tissue. In quantitative evaluation, the mean normalized mutual information score of our results is 0.437. In qualitative evaluation, our segmentation results prove helpful for observing the anatomical extent of cancer components. Moreover, we visualize the degree of certainty of the segmentation results by using the values of the categorical latent variables.
- Published
- 2019
- Full Text
- View/download PDF
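The second phase's labeling step reduces to an argmax over each patch's categorical latent variable, with the winning probability reused as the certainty that the paper visualizes. A minimal sketch (the posteriors below are invented; the generative model that produces them is not shown):

```python
def assign_patches(category_probs):
    # Each patch takes the most probable latent category as its label;
    # the winning probability doubles as a per-patch certainty score.
    labels, certainty = [], []
    for probs in category_probs:
        k = max(range(len(probs)), key=probs.__getitem__)
        labels.append(k)
        certainty.append(probs[k])
    return labels, certainty

# toy posteriors over {invasive, noninvasive, normal} for two patches
labels, certainty = assign_patches([[0.7, 0.2, 0.1], [0.1, 0.1, 0.8]])
```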
17. Tunable CT Lung Nodule Synthesis Conditioned on Background Image and Semantic Features
- Author
- Daguang Xu, Xiaosong Wang, Ziyue Xu, Ling Zhang, Hoo-Chang Shin, Holger R. Roth, Dong Yang, and Fausto Milletari
- Subjects
- Discriminator, Semantic feature, Computer science, Inpainting, Pattern recognition, Feature (computer vision), Segmentation, Artificial intelligence
- Abstract
Synthetic CT images with artificially generated lung nodules have been shown to be useful as an augmentation method for tasks such as lung segmentation and nodule classification. Most conventional methods are designed as "inpainting" tasks: a region is removed from the background image and the foreground nodule is synthesized in its place. To ensure natural blending with the background, existing methods have proposed dedicated loss functions and separate shape/appearance generation, but spatial discontinuity is still unavoidable in certain cases. Meanwhile, there is often little control over semantic features of the nodule, which may limit the capability for fine-grained augmentation to balance the original data. In this work, we address these two challenges by developing a 3D multi-conditional generative adversarial network (GAN) that is conditioned on both a background image and semantic features for lung nodule synthesis on CT images. Instead of removing part of the input image, we use a fusion block to blend object and background, ensuring a more realistic appearance. Multiple discriminator scenarios are considered, and three outputs (image, segmentation, and feature) are used to guide the synthesis process towards semantic feature control. We trained our method on a public dataset and showed promising results as a solution for tunable lung nodule synthesis.
- Published
- 2019
- Full Text
- View/download PDF
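The fusion block's advantage over cut-and-paste inpainting can be conveyed by plain per-pixel alpha blending: a soft mask avoids a hard seam between the synthesized nodule and the background. This is only the blending idea, not the learned block itself; the intensity values and mask below are invented, and a single image row stands in for a 3D volume.

```python
def fuse(background, synthesized, mask):
    # Convex combination per pixel: outside the mask the background is
    # kept exactly; inside, the nodule fades in without a hard edge.
    return [b * (1.0 - m) + s * m
            for b, s, m in zip(background, synthesized, mask)]

row = fuse(background=[0.2, 0.2, 0.2, 0.2],
           synthesized=[0.9, 0.9, 0.9, 0.9],
           mask=[0.0, 0.5, 1.0, 0.0])
```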
18. Searching Learning Strategy with Reinforcement Learning for 3D Medical Image Segmentation
- Author
- Dong Yang, Holger R. Roth, Fausto Milletari, Ziyue Xu, Daguang Xu, and Ling Zhang
- Subjects
- Artificial neural network, Computer science, Image segmentation, Machine learning, Convolutional neural network, Reinforcement learning, Segmentation, Artificial intelligence
- Abstract
Deep neural network (DNN) based approaches have been widely investigated and deployed in medical image analysis. For example, fully convolutional neural networks (FCN) achieve state-of-the-art performance in several applications of 2D/3D medical image segmentation. Even baseline neural network models (U-Net, V-Net, etc.) have proven very effective and efficient when the training process is set up properly. Nevertheless, to fully exploit the potential of neural networks, we propose an automated approach that searches for the optimal training strategy with reinforcement learning. The proposed approach can be utilized for tuning hyper-parameters and for selecting necessary data augmentations with certain probabilities. It is validated on several tasks of 3D medical image segmentation. The performance of the baseline model is boosted after searching, and it achieves accuracy comparable to other manually tuned, state-of-the-art segmentation approaches.
- Published
- 2019
- Full Text
- View/download PDF
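As a stand-in for the reinforcement-learning controller above (whose real search space and reward are far richer), an epsilon-greedy bandit over a few augmentation strategies shows the propose-train-reward loop. The strategy names and their toy Dice rewards are invented, and the reward is made deterministic here for brevity.

```python
import random

def bandit_search(reward_fn, arms, iters=100, eps=0.2, seed=0):
    # Epsilon-greedy controller: pull every arm once, then usually
    # exploit the best-known strategy and sometimes explore. Real
    # validation rewards are noisy; here they are deterministic.
    rng = random.Random(seed)
    counts = {a: 1 for a in arms}
    values = {a: reward_fn(a) for a in arms}
    for _ in range(iters):
        arm = rng.choice(arms) if rng.random() < eps else max(values, key=values.get)
        reward = reward_fn(arm)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
    return max(values, key=values.get)

toy_dice = {"none": 0.80, "flip": 0.85, "flip+noise": 0.83}
best_strategy = bandit_search(lambda a: toy_dice[a], list(toy_dice))
```

A full RL controller would additionally condition its proposals on past trials rather than treating strategies as independent arms.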
19. End-to-End Adversarial Shape Learning for Abdomen Organ Deep Segmentation
- Author
- Jinzheng Cai, Dong Yang, Holger R. Roth, Daguang Xu, Lin Yang, and Yingda Xia
- Subjects
- Computer science, Deep learning, Medical imaging, Segmentation, Computer vision, Artificial intelligence, Convolutional neural network
- Abstract
Automatic segmentation of abdominal organs from medical imaging has many potential applications in clinical workflows. Recently, state-of-the-art performance for organ segmentation has been achieved by deep learning models, i.e., convolutional neural networks (CNNs). However, it is challenging to make conventional CNN-based segmentation models aware of the shape and topology of organs. In this work, we tackle this problem by introducing a novel end-to-end shape learning architecture, the organ point-network. It takes deep learning features as inputs and generates organ shape representations as points located on the organ surface. We then present a novel adversarial shape learning objective function to optimize the point-network to better capture shape information. We train the point-network together with a CNN-based segmentation model in a multi-task fashion so that the shared network parameters can benefit from both shape learning and segmentation tasks. We demonstrate our method on three challenging abdominal organs: liver, spleen, and pancreas. The point-network generates surface points with fine-grained detail, which is found critical for improving organ segmentation. Consequently, the deep segmentation model is improved by the introduced shape learning, with significantly better Dice scores observed for spleen and pancreas segmentation.
- Published
- 2019
- Full Text
- View/download PDF
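The paper scores predicted surface points adversarially; a non-adversarial stand-in for "how far apart are two point sets" is the symmetric Chamfer distance, sketched here on invented 2-D toy points to make the shape-comparison idea concrete:

```python
def chamfer(set_a, set_b):
    # Average squared distance from each point to its nearest neighbor
    # in the other set, symmetrized; small values mean similar shapes.
    def one_way(src, dst):
        return sum(min(sum((s - d) ** 2 for s, d in zip(p, q)) for q in dst)
                   for p in src) / len(src)
    return one_way(set_a, set_b) + one_way(set_b, set_a)

predicted = [(0.0, 0.0), (1.0, 0.0)]
reference = [(0.0, 0.0), (1.0, 1.0)]
dist = chamfer(predicted, reference)
```

An adversarial objective, as in the paper, replaces this fixed metric with a learned critic of shape plausibility.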
20. Introduction: A Special Note on the Heightened Effects of Urban Marginality in the Trump Era
- Author
- Kenneth R. Roth
- Subjects
- Civil rights, Political science, Political economy, Democracy, Diversity (politics)
- Abstract
Democracy, diversity, and what most would consider decency all may be under siege in the Americas, and much of that attack emanates from the US in the Trump era. With the increasing disregard of human and civil rights inside the US, at its borders, and in Puerto Rico, it is difficult to imagine a time other than antebellum slavery when the US has been so at odds with its creed. However, these growing challenges are not confined to the US: marginality of nondominant groups is on the rise throughout the hemisphere, with varied outcomes, few of them positive. This chapter posits a clear and present concern and outlines how the authors of the ensuing chapters interrogate this moment in the Americas.
- Published
- 2018
- Full Text
- View/download PDF
21. Colon Shape Estimation Method for Colonoscope Tracking Using Recurrent Neural Networks
- Author
- Kensaku Mori, Kazuhiro Furukawa, Nassir Navab, Ryoji Miyahara, Takayuki Kitasaka, Masahiro Oda, Yoshiki Hirooka, Hidemi Goto, and Holger R. Roth
- Subjects
- Computer science, Perforation, Tracking, Digestive system diseases, Imaging phantom, Recurrent neural network, Computer vision, Artificial intelligence
- Abstract
We propose a method using a recurrent neural network (RNN) to estimate the shape of the colon as it is deformed by colonoscope insertion. A colonoscope tracking or navigation system that guides the physician to polyp positions is needed to reduce complications such as colon perforation. Previous tracking methods produced large errors at the transverse and sigmoid colons because these areas deform substantially during colonoscope insertion, so colon deformation should be taken into account in the tracking process. We propose a colon deformation estimation method using an RNN and obtain the colonoscope shape from electromagnetic sensors during its insertion into the colon. This method extracts position, direction, and insertion length from the colonoscope shape. From the shape, we also calculate relative features that represent the positional and directional relationships between two points on the colonoscope. Long short-term memory is used to estimate the current colon shape from the past transition of these features. In a phantom study, we correctly estimated colon shapes during colonoscope insertion with an estimation error of 12.39 mm.
- Published
- 2018
- Full Text
- View/download PDF
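The "relative features" between two points on the scope reduce to a positional offset plus the angle between their direction vectors. A minimal sketch with invented coordinates (the LSTM that consumes these features over time is not shown):

```python
import math

def relative_features(pos_a, pos_b, dir_a, dir_b):
    # Positional relation: Euclidean distance between the two points.
    # Directional relation: angle between the two unit direction vectors.
    offset = [b - a for a, b in zip(pos_a, pos_b)]
    distance = math.sqrt(sum(o * o for o in offset))
    dot = sum(a * b for a, b in zip(dir_a, dir_b))
    angle = math.acos(max(-1.0, min(1.0, dot)))  # clamp guards rounding
    return distance, angle

distance, angle = relative_features((0.0, 0.0, 0.0), (3.0, 4.0, 0.0),
                                    (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```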
22. Boston Naming Test
- Author
- Carole R. Roth and Nancy Helm-Estabrooks
- Published
- 2018
- Full Text
- View/download PDF
23. Towards Automated Colonoscopy Diagnosis: Binary Polyp Size Estimation via Unsupervised Depth Learning
- Author
- Holger R. Roth, Le Lu, Masahiro Oda, Yuichi Mori, Masashi Misawa, Hayato Itoh, Shin-ei Kudo, and Kensaku Mori
- Subjects
- Computer science, Deep learning, Feature extraction, Colonoscopy, Pattern recognition, Colon cancer screening, Depth map, RGB color model, Artificial intelligence, Spatial analysis
- Abstract
In colon cancer screening, estimating polyp size from colonoscopy images or videos alone is difficult even for expert physicians, although the size information of polyps is important for diagnosis. For a fully automated computer-aided diagnosis (CAD) pipeline, a robust and precise polyp size estimation method is highly desirable. However, estimating the size of a three-dimensional object from a two-dimensional image is ill-posed due to the lack of three-dimensional spatial information. To circumvent this challenge, we formulate a relaxed form of size estimation as a binary classification problem and solve it with a new deep neural network architecture: BseNet. This relaxed form is a two-category classification: under or over a polyp dimension criterion that provokes different clinical treatments (resecting the polyp or not). BseNet estimates a depth map from an input colonoscopic RGB image using unsupervised deep learning and integrates the RGB channels with the computed depth into four-channel RGB-D imagery, which BseNet then encodes into deep RGB-D image features to classify polyps into two size categories: under and over 10 mm. For the evaluation of BseNet, we constructed a large dataset of colonoscopic videos totaling over 16 hours. We evaluate both binary polyp size estimation and polyp detection performance, since detection is a prerequisite step of a fully automated CAD system. The experimental results show that our proposed BseNet achieves 79.2% accuracy for binary polyp-size classification. We also combine BseNet’s image feature extraction with classification of short video clips using a long short-term memory (LSTM) network. Polyp detection (whether a video clip contains a polyp) reaches 88.8% sensitivity when employing this spatio-temporal image feature extraction and classification.
- Published
- 2018
- Full Text
- View/download PDF
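The RGB-D stacking and the relaxed binary labeling described for BseNet can be illustrated as follows (a minimal sketch; `to_rgbd` and `binary_size_label` are hypothetical helper names, and the actual network learns the decision from image features rather than thresholding a known size):

```python
import numpy as np

def to_rgbd(rgb, depth):
    """Stack an H x W x 3 RGB frame with an H x W estimated depth map
    into the four-channel RGB-D input described in the abstract."""
    return np.concatenate([rgb, depth[..., None]], axis=-1)

def binary_size_label(size_mm, criterion_mm=10.0):
    """Relaxed size estimation: 1 if the polyp is at/over the clinical
    dimension criterion, else 0."""
    return int(size_mm >= criterion_mm)

rgb = np.zeros((4, 4, 3))
depth = np.ones((4, 4))
x = to_rgbd(rgb, depth)
```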
24. BESNet: Boundary-Enhanced Segmentation of Cells in Histopathological Images
- Author
-
Jure Sokolic, Kensaku Mori, Hirohisa Oda, Masahiro Oda, Hiroo Uchida, Akinari Hinoki, Julia A. Schnabel, Takayuki Kitasaka, Kosuke Chiba, and Holger R. Roth
- Subjects
Computer science ,business.industry ,Deep learning ,010401 analytical chemistry ,Boundary (topology) ,Pattern recognition ,01 natural sciences ,030218 nuclear medicine & medical imaging ,0104 chemical sciences ,03 medical and health sciences ,0302 clinical medicine ,Feature (computer vision) ,Path (graph theory) ,Segmentation ,Artificial intelligence ,business ,Image resolution - Abstract
We propose a novel deep learning method called Boundary-Enhanced Segmentation Network (BESNet) for the detection and semantic segmentation of cells on histopathological images. The semantic segmentation of small regions using fully convolutional networks typically suffers from inaccuracies around the boundaries of small structures, like cells, because the probabilities often become blurred. In this work, we propose a new network structure that encodes input images to feature maps similar to U-net but utilizes two decoding paths that restore the original image resolution. One decoding path enhances the boundaries of cells, which can be used to improve the quality of the entire cell segmentation achieved in the other decoding path. We explore two strategies for enhancing the boundaries of cells: (1) skip connections of feature maps, and (2) adaptive weighting of loss functions. In (1), the feature maps from the boundary decoding path are concatenated with the decoding path for entire cell segmentation. In (2), an adaptive weighting of the loss for entire cell segmentation is performed when boundaries are not enhanced strongly, because detecting such parts is difficult. The detection rate of ganglion cells was 80.0% with 1.0 false positives per histopathology slice. The mean Dice index representing segmentation accuracy was 74.0%. BESNet produced a similar detection performance and higher segmentation accuracy than comparable U-net architectures without our modifications.
- Published
- 2018
- Full Text
- View/download PDF
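Strategy (2) above, adaptively up-weighting the cell-segmentation loss where boundaries are not strongly enhanced, might look like this in spirit (a sketch with an assumed weighting form; BESNet’s exact formula may differ):

```python
import numpy as np

def adaptive_weights(boundary_prob, alpha=1.0):
    """Per-pixel weights for the cell-segmentation loss: pixels whose
    boundary response is weak get up-weighted, since those parts are
    hard to detect (one plausible reading of strategy (2))."""
    return 1.0 + alpha * (1.0 - boundary_prob)

def weighted_bce(pred, target, weights, eps=1e-7):
    """Binary cross-entropy with a per-pixel weight map."""
    pred = np.clip(pred, eps, 1 - eps)
    loss = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return float(np.mean(weights * loss))

bprob = np.array([[1.0, 0.2]])          # strong vs. weak boundary response
w = adaptive_weights(bprob)
loss = weighted_bce(np.array([[0.9, 0.4]]), np.array([[1.0, 1.0]]), w)
```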
25. Anarthria
- Author
-
Ignatius Nip and Carole R. Roth
- Published
- 2018
- Full Text
- View/download PDF
26. Dysarthria
- Author
-
Carole R. Roth and Ignatius Nip
- Published
- 2018
- Full Text
- View/download PDF
27. Fully Convolutional Network-Based Eyeball Segmentation from Sparse Annotation for Eye Surgery Simulation Model
- Author
-
Masahiro Oda, Takaaki Sugino, Holger R. Roth, and Kensaku Mori
- Subjects
Similarity (geometry) ,Computer science ,business.industry ,Pooling ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Construct (python library) ,Residual ,Image (mathematics) ,Segmentation ,Computer vision ,Artificial intelligence ,Macro ,business ,Volume (compression) - Abstract
This paper presents a fully convolutional network-based segmentation method for creating eyeball model data for patient-specific ophthalmologic surgery simulation. To create an elaborate eyeball model for each patient, we must accurately segment eye structures of different sizes and complex shapes from high-resolution images. We therefore construct a fully convolutional network that learns accurate segmentation of the anatomical structures of the eyeball from sparsely-annotated training images, so that a user who annotates only a few slices in each image volume obtains annotations for all slices. Our network uses full-resolution residual units, which act as a bridge between its two processing streams (residual and pooling) and effectively learn multi-scale image features for segmenting eye macro- and microstructures. In addition, a weighted loss function and data augmentation are used during training to achieve accurate semantic segmentation from only sparsely-annotated axial images. In segmentation experiments on micro-CT images of pig eyeballs, the proposed network provided better segmentation performance than conventional networks and achieved a mean Dice similarity coefficient of 91.5% for eye structures, even from a small amount of training data.
- Published
- 2018
- Full Text
- View/download PDF
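Training from sparse annotations implies masking the loss to annotated slices; a minimal sketch of such a masked cross-entropy (the function name and details are assumptions, not taken from the paper):

```python
import numpy as np

def sparse_slice_loss(pred, label, annotated, eps=1e-7):
    """Voxel-wise binary cross-entropy averaged over annotated axial
    slices only; unannotated slices contribute nothing."""
    pred = np.clip(pred, eps, 1 - eps)
    ce = -(label * np.log(pred) + (1 - label) * np.log(1 - pred))
    mask = annotated[:, None, None].astype(float)  # slice flag -> voxel mask
    n_vox = mask.sum() * pred.shape[1] * pred.shape[2]
    return float((ce * mask).sum() / n_vox)

label = np.zeros((3, 4, 4)); label[0] = 1.0
ann = np.array([True, True, False])      # only two slices are annotated
pred_a = label.copy()                    # perfect prediction
pred_b = pred_a.copy(); pred_b[2] = 0.9  # wrong only on the unannotated slice
```

Changing predictions on the unannotated slice leaves the loss untouched, which is the point of the masking.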
28. A Multi-scale Pyramid of 3D Fully Convolutional Networks for Abdominal Multi-organ Segmentation
- Author
-
Masahiro Oda, Takaaki Sugino, Hirohisa Oda, Kensaku Mori, Yuichiro Hayashi, Chen Shen, Holger R. Roth, and Kazunari Misawa
- Subjects
Computer science ,business.industry ,Deep learning ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,Multi organ ,030218 nuclear medicine & medical imaging ,Upsampling ,03 medical and health sciences ,0302 clinical medicine ,Robustness (computer science) ,030220 oncology & carcinogenesis ,Pyramid ,Segmentation ,Artificial intelligence ,business - Abstract
Recent advances in deep learning, like 3D fully convolutional networks (FCNs), have improved the state-of-the-art in dense semantic segmentation of medical images. However, most network architectures require severe downsampling or cropping of the images to meet the memory limitations of today’s GPU cards while still considering enough image context for accurate segmentation. In this work, we propose a novel approach that utilizes auto-context to perform semantic segmentation at higher resolutions in a multi-scale pyramid of stacked 3D FCNs. We train and validate our models on a dataset of manually annotated abdominal organs and vessels from 377 clinical CT images used in gastric surgery, and achieve promising results with close to 90% Dice score on average. For additional evaluation, we perform separate testing on datasets from different sources and achieve competitive results, illustrating the robustness of the model and approach.
- Published
- 2018
- Full Text
- View/download PDF
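The auto-context idea, feeding an upsampled coarse-stage prediction to the next pyramid level as an extra input channel, can be sketched as follows (illustrative only; the paper’s FCNs operate in 3D and the upsampling scheme here is an assumption):

```python
import numpy as np

def upsample2x(prob):
    """Nearest-neighbour 2x upsampling of a coarse probability map."""
    return prob.repeat(2, axis=0).repeat(2, axis=1)

def autocontext_input(image, coarse_prob):
    """Stack the image with the upsampled coarse prediction so the finer
    stage of the pyramid sees both intensity and context."""
    return np.stack([image, upsample2x(coarse_prob)], axis=0)

img = np.zeros((8, 8))
coarse = np.full((4, 4), 0.5)  # prediction from the coarse stage
x = autocontext_input(img, coarse)
```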
29. Tracking and Segmentation of the Airways in Chest CT Using a Fully Convolutional Network
- Author
-
Junji Ueno, Kensaku Mori, Holger R. Roth, Takayuki Kitasaka, Qier Meng, and Masahiro Oda
- Subjects
medicine.medical_specialty ,Computer science ,business.industry ,Deep learning ,Pattern recognition ,respiratory system ,Tracking (particle physics) ,respiratory tract diseases ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,medicine ,Medical imaging ,Segmentation ,Radiology ,Artificial intelligence ,business ,Encoder ,030217 neurology & neurosurgery - Abstract
Airway segmentation plays an important role in analyzing chest computed tomography (CT) volumes for tasks such as lung cancer detection, chronic obstructive pulmonary disease (COPD) assessment, and surgical navigation. However, due to the complex tree-like structure of the airways, obtaining highly accurate segmentation results for a complete 3D airway extraction remains a challenging task. In recent years, deep learning based methods, especially fully convolutional networks (FCNs), have improved the state-of-the-art in many segmentation tasks. 3D U-Net is one such architecture optimized for 3D biomedical imaging. It consists of a contracting encoder part that analyzes the input volume and a successive decoder part that generates integrated 3D segmentation results. While 3D U-Net can be trained for any 3D segmentation task, its direct application to airway segmentation is challenging because airway branches differ greatly in size. In this work, we combine 3D deep learning with image-based tracking in order to automatically extract the airways. Our method is driven by adaptive cuboidal volume of interest (VOI) analysis using a 3D U-Net model. We track the airways along their centerlines and set VOIs according to the diameter and running direction of each airway. After setting a VOI, the 3D U-Net is utilized to extract the airway region inside it. All extracted candidate airway regions are unified to form an integrated airway tree. We trained on 30 cases and tested our method on an additional 20 cases. Compared with other state-of-the-art airway tracking and segmentation methods, our method increases the detection rate by 5.6 percentage points while decreasing the false positives (FP) by 0.7 percentage points.
- Published
- 2017
- Full Text
- View/download PDF
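Setting a cuboidal VOI whose size follows the local airway diameter might be sketched like this (an assumption-laden illustration; `extract_voi`, the margin factor, and ignoring the running direction are all simplifications of the method described above):

```python
import numpy as np

def extract_voi(volume, center, diameter, margin=2.0):
    """Crop a cuboidal VOI around an airway centerline point; the edge
    length scales with the local branch diameter, clipped to the volume."""
    half = int(np.ceil(diameter * margin / 2.0))
    lo = [max(0, c - half) for c in center]
    hi = [min(s, c + half + 1) for c, s in zip(center, volume.shape)]
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]], (lo, hi)

vol = np.arange(27).reshape(3, 3, 3)
voi, (lo, hi) = extract_voi(vol, (1, 1, 1), diameter=1.0)
```

The 3D U-Net would then segment the airway region inside each such VOI before the per-VOI results are unified into a tree.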
30. Efficient False Positive Reduction in Computer-Aided Detection Using Convolutional Neural Networks and Random View Aggregation
- Author
-
Kevin M. Cherry, Ari Seff, Holger R. Roth, Lauren Kim, Le Lu, Jiamin Liu, Ronald M. Summers, and Jianhua Yao
- Subjects
Computer science ,business.industry ,Pattern recognition ,Patient data ,Machine learning ,computer.software_genre ,01 natural sciences ,Convolutional neural network ,Computer aided detection ,030218 nuclear medicine & medical imaging ,010309 optics ,Data set ,03 medical and health sciences ,0302 clinical medicine ,0103 physical sciences ,False positive paradox ,Medical imaging ,Test phase ,Artificial intelligence ,business ,computer ,Classifier (UML) - Abstract
Automated computer-aided detection (CADe) is an important tool in clinical practice and medical imaging research. While many methods can achieve high sensitivities, they typically suffer from a high number of false positives (FP) per patient. In this study, we describe a two-stage coarse-to-fine approach in which CADe candidate generation systems operate at high sensitivity rates (close to \(100\%\) recall). In a second stage, we reduce false positive numbers using state-of-the-art machine learning methods, namely deep convolutional neural networks (ConvNets). The ConvNets are trained to differentiate hard false positives from true positives utilizing a set of 2D (two-dimensional) or 2.5D re-sampled views comprising random translations, rotations, and multi-scale observations around a candidate’s center coordinate. During the test phase, we apply the ConvNets to unseen patient data and aggregate all probability scores for lesions (or pathology). We found that this second stage is a highly selective classifier that is able to reject difficult false positives while retaining good sensitivity rates. The method was evaluated on three data sets (sclerotic metastases, lymph nodes, colonic polyps) with varying numbers of patients (59, 176, and 1,186, respectively). Experiments show that the method generalizes to different applications and increasing data set sizes. Marked improvements are observed in all cases: sensitivities increased from 57 to 70%, from 43 to 77%, and from 58 to 75% for sclerotic metastases, lymph nodes, and colonic polyps, respectively, at a low FP rate per patient (3 FPs/patient).
- Published
- 2017
- Full Text
- View/download PDF
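The random-view generation and score aggregation of the second stage can be sketched in simplified form (translations only; rotations, scales, and the ConvNet itself are omitted, and all names here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_views(volume, center, size, n_views):
    """Sample n 2D axial patches with small random translations around a
    candidate's center coordinate (rotations/scales omitted for brevity)."""
    z, y, x = center
    views = []
    for _ in range(n_views):
        dy, dx = rng.integers(-2, 3, size=2)
        views.append(volume[z, y + dy:y + dy + size, x + dx:x + dx + size])
    return views

def aggregate(scores):
    """Final per-candidate probability: average the ConvNet scores over
    the random views."""
    return float(np.mean(scores))

vol = np.zeros((5, 20, 20))
vs = random_views(vol, (2, 10, 10), size=4, n_views=5)
```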
31. Automatic Pancreas Segmentation Using Coarse-to-Fine Superpixel Labeling
- Author
-
Holger R. Roth, Amal Farag, Jiamin Liu, Evrim B. Turkbey, Le Lu, and Ronald M. Summers
- Subjects
Jaccard index ,business.industry ,Computer science ,Orientation (computer vision) ,Image (category theory) ,Pattern recognition ,02 engineering and technology ,Cross-validation ,030218 nuclear medicine & medical imaging ,Random forest ,03 medical and health sciences ,0302 clinical medicine ,Sørensen–Dice coefficient ,0202 electrical engineering, electronic engineering, information engineering ,Medical imaging ,020201 artificial intelligence & image processing ,Segmentation ,Artificial intelligence ,business - Abstract
Accurate automatic detection and segmentation of abdominal organs from CT images is important for quantitative and qualitative organ tissue analysis, detection of pathologies, surgical assistance, and computer-aided diagnosis (CAD). In general, the large variability of organ locations, the spatial interaction between organs that appear similar in medical scans, and orientation and size variations are among the major challenges of organ segmentation. The pancreas poses these challenges in addition to a flexibility that allows the shape of the tissue to change vastly. In this chapter, we present a fully automated bottom-up approach for pancreas segmentation in abdominal computed tomography (CT) scans. The method is a four-stage system based on a hierarchical cascade of information propagation, classifying image patches at different resolutions and cascading superpixels (segments). System components consist of the following: (1) decomposing CT slice images into a set of disjoint boundary-preserving superpixels; (2) computing pancreas class probability maps via dense patch labeling; (3) classifying superpixels by pooling both intensity and probability features to form empirical statistics in cascaded random forest frameworks; and (4) simple connectivity-based post-processing. Evaluation of the approach is conducted on a database of 80 manually segmented CT volumes in sixfold cross-validation. Our results are comparable to, or better than, state-of-the-art methods (evaluated by “leave-one-patient-out”), with a Dice coefficient of \(70.7\%\) and a Jaccard index of \(57.9\%\). The computational efficiency of the proposed approach is drastically improved, at 6–8 min per testing case compared to \({\ge }10\) hours for other methods.
- Published
- 2017
- Full Text
- View/download PDF
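Step (3) above, pooling intensity and probability statistics per superpixel into classifier features, could look like this (a sketch; the actual feature set and cascaded random forest frameworks are more elaborate):

```python
import numpy as np

def superpixel_stats(intensity, prob_map, labels):
    """Pool intensity and patch-level probability into empirical
    statistics (mean/std/mean/max) per superpixel, to be fed to a
    superpixel classifier."""
    feats = {}
    for sp in np.unique(labels):
        m = labels == sp
        feats[int(sp)] = [intensity[m].mean(), intensity[m].std(),
                          prob_map[m].mean(), prob_map[m].max()]
    return feats

inten = np.array([[0.0, 0.0], [10.0, 10.0]])
prob = np.array([[0.1, 0.3], [0.8, 0.6]])
labels = np.array([[0, 0], [1, 1]])
feats = superpixel_stats(inten, prob, labels)
```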
32. 3D FCN Feature Driven Regression Forest-Based Pancreas Localization and Segmentation
- Author
-
Daniel Rueckert, Takayuki Kitasaka, Ken’ichi Karasawa, Kensaku Mori, Natsuki Shimizu, Holger R. Roth, Michitaka Fujiwara, Masahiro Oda, and Kazunari Misawa
- Subjects
Jaccard index ,business.industry ,Computer science ,020207 software engineering ,Pattern recognition ,02 engineering and technology ,medicine.anatomical_structure ,Fully automated ,Feature (computer vision) ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,020201 artificial intelligence & image processing ,Segmentation ,Artificial intelligence ,Probabilistic atlas ,business ,Pancreas ,Regression forest - Abstract
This paper presents a fully automated atlas-based pancreas segmentation method for CT volumes utilizing 3D fully convolutional network (FCN) feature-based pancreas localization. Segmentation of the pancreas is difficult because it has larger inter-patient spatial variations than other organs, and previous pancreas segmentation methods failed to deal with such variations. We propose a fully automated pancreas segmentation method that consists of novel localization and segmentation stages. Since the pancreas neighbors many other organs, its position and size are strongly related to the positions of the surrounding organs. We estimate the position and the size of the pancreas (localization) from global features by regression forests. As global features, we use intensity differences and 3D FCN deep learned features, which include automatically extracted features essential for segmentation. We take the 3D FCN features from a trained 3D U-Net that performs multi-organ segmentation, so the global features capture both the pancreas and surrounding organ information. After localization, a patient-specific probabilistic atlas-based pancreas segmentation is performed. In an evaluation with 146 CT volumes, we achieved a Jaccard index of 60.6% and a Dice overlap of 73.9%.
- Published
- 2017
- Full Text
- View/download PDF
33. TBS: Tensor-Based Supervoxels for Unfolding the Heart
- Author
-
Julia A. Schnabel, Takayuki Kitasaka, Kanwal K. Bhatia, Holger R. Roth, Toshiaki Akita, Kensaku Mori, Masahiro Oda, and Hirohisa Oda
- Subjects
Physics ,Cardiac anatomy ,02 engineering and technology ,030218 nuclear medicine & medical imaging ,Canine heart ,03 medical and health sciences ,0302 clinical medicine ,medicine.anatomical_structure ,Nuclear magnetic resonance ,Ventricle ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,020201 artificial intelligence & image processing ,Tensor - Abstract
Investigation of the myofiber structure of the heart is desired for studies of anatomy and diseases. However, it is difficult to understand the left ventricle structure intuitively because it consists of three layers with different myofiber orientations. In this work, we propose an unfolding method for micro-focus X-ray CT (\(\mu \)CT) volumes of the heart. First, we explore a novel supervoxel over-segmentation technique, Tensor-Based Supervoxels (TBS), which allows us to divide the left ventricle into three layers. We utilize TBS and B-spline curves for extraction of the layers. Finally we project \(\mu \)CT intensities in each layer to an unfolded view. Experiments are performed using three \(\mu \)CT images of the left ventricle acquired from canine heart specimens. In all cases, the myofiber structure could be observed clearly in the unfolded views. This is promising for helping cardiac studies.
- Published
- 2017
- Full Text
- View/download PDF
34. Micro-CT Guided 3D Reconstruction of Histological Images
- Author
-
Hirohisa Oda, Kensaku Mori, Masahiro Oda, Kai Nagara, Shota Nakamura, Takayasu Moriya, and Holger R. Roth
- Subjects
0301 basic medicine ,2d images ,Computer science ,3D reconstruction ,02 engineering and technology ,021001 nanoscience & nanotechnology ,03 medical and health sciences ,030104 developmental biology ,Tissue specimen ,Microscopy ,0210 nano-technology ,Micro ct ,Feature matching ,Biomedical engineering - Abstract
Histological images are very important for the diagnosis of cancer and other diseases. However, during the preparation of histological slides for microscopy, the 3D information of the tissue specimen is lost, and many 3D reconstruction methods for histological images have therefore been proposed. Most approaches, however, rely on the histological 2D images alone, which makes 3D reconstruction difficult due to the large deformations introduced by cutting and preparing the histological slides. In this work, we propose an image-guided approach to 3D reconstruction of histological images. Before histological preparation of the slides, the specimen is imaged using X-ray microtomography (micro-CT). We can then align each histological image back to the micro-CT image utilizing non-rigid registration. Our registration results show that our method can provide smooth 3D reconstructions with micro-CT guidance.
- Published
- 2017
- Full Text
- View/download PDF
35. Dysarthria
- Author
-
Carole R. Roth and Ignatius Nip
- Published
- 2017
- Full Text
- View/download PDF
36. Anarthria
- Author
-
Ignatius Nip and Carole R. Roth
- Published
- 2017
- Full Text
- View/download PDF
37. Three Aspects on Using Convolutional Neural Networks for Computer-Aided Detection in Medical Imaging
- Author
-
Jianhua Yao, Le Lu, Mingchen Gao, Daniel J. Mollura, Holger R. Roth, Ziyue Xu, Ronald M. Summers, Isabella Nogues, and Hoo-Chang Shin
- Subjects
Contextual image classification ,Computer science ,business.industry ,Context (language use) ,02 engineering and technology ,Machine learning ,computer.software_genre ,Convolutional neural network ,Computer aided detection ,030218 nuclear medicine & medical imaging ,Image (mathematics) ,03 medical and health sciences ,0302 clinical medicine ,Feature (computer vision) ,0202 electrical engineering, electronic engineering, information engineering ,Medical imaging ,020201 artificial intelligence & image processing ,Artificial intelligence ,Transfer of learning ,business ,computer - Abstract
Deep convolutional neural networks (CNNs) learn highly representative, hierarchical image features from sufficient training data, which has made rapid progress in computer vision possible. Three major techniques currently succeed in applying CNNs to medical image classification: training a CNN from scratch, using off-the-shelf pretrained CNN features, and transfer learning, i.e., fine-tuning CNN models pretrained on a natural image dataset (such as the large-scale annotated database ImageNet) for medical image tasks. In this chapter, we examine three important factors in employing deep convolutional neural networks for computer-aided detection problems. First, we evaluate several different CNN architectures, from shallower to deeper: the classical CifarNet, the more recent AlexNet, and the state-of-the-art GoogLeNet and their variants. The studied models contain from five thousand to 160 million parameters and vary in their numbers of layers. Second, we explore the influence of dataset scale and spatial image context configurations on medical image classification performance. Third, we carefully examine when and why transfer learning from pretrained ImageNet CNN models (via fine-tuning) is useful for medical imaging tasks. We study two specific computer-aided detection (CADe) problems, namely thoracoabdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection and report the first fivefold cross-validation classification results for predicting axial CT slices with ILD categories. Our extensive quantitative evaluation, CNN model analysis, and empirical insights can inform the design of high-performance CADe systems for other medical imaging tasks, without loss of generality.
- Published
- 2017
- Full Text
- View/download PDF
38. Spatial Aggregation of Holistically-Nested Networks for Automated Pancreas Segmentation
- Author
-
Amal Farag, Le Lu, Holger R. Roth, Andrew Sohn, and Ronald M. Summers
- Subjects
Computer science ,business.industry ,Boundary (topology) ,Scale-space segmentation ,02 engineering and technology ,030218 nuclear medicine & medical imaging ,Random forest ,03 medical and health sciences ,0302 clinical medicine ,medicine.anatomical_structure ,Similarity (network science) ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Spatial aggregation ,020201 artificial intelligence & image processing ,Segmentation ,Computer vision ,Artificial intelligence ,Pancreas ,business - Abstract
Accurate automatic organ segmentation is an important yet challenging problem for medical image analysis. The pancreas is an abdominal organ with very high anatomical variability. This inhibits traditional segmentation methods from achieving high accuracies, especially compared to other organs such as the liver, heart, or kidneys. In this paper, we present a holistic learning approach that integrates semantic mid-level cues of deeply-learned organ interior and boundary maps via robust spatial aggregation using random forests. Our method generates boundary-preserving pixel-wise class labels for pancreas segmentation. Quantitative evaluation is performed on CT scans of 82 patients in 4-fold cross-validation. We achieve a (mean ± std. dev.) Dice Similarity Coefficient of 78.01% ± 8.2% in testing, which significantly outperforms the previous state-of-the-art approach of 71.8% ± 10.7% under the same evaluation criterion.
- Published
- 2016
- Full Text
- View/download PDF
39. Automatic Lymph Node Cluster Segmentation Using Holistically-Nested Neural Networks and Structured Optimization in CT Images
- Author
-
Gedas Bertasius, Jianbo Shi, Yohannes Tsehay, Xiaosong Wang, Nathan Lay, Ronald M. Summers, Isabella Nogues, Le Lu, and Holger R. Roth
- Subjects
Conditional random field ,Artificial neural network ,Computer science ,business.industry ,Pattern recognition ,Convolutional neural network ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,medicine.anatomical_structure ,Cut ,medicine ,Computer vision ,Body region ,Segmentation ,Artificial intelligence ,business ,Lymph node ,030217 neurology & neurosurgery - Abstract
Lymph node segmentation is an important yet challenging problem in medical image analysis. The presence of enlarged lymph nodes (LNs) signals the onset or progression of a malignant disease or infection. In the thoracoabdominal (TA) body region, neighboring enlarged LNs often spatially collapse into “swollen” lymph node clusters (LNCs) (up to 9 LNs in our dataset). Accurate segmentation of TA LNCs is complicated by the noticeably poor intensity and texture contrast among neighboring LNs and surrounding tissues, and has not been addressed in previous work. This paper presents a novel approach to TA LNC segmentation that combines holistically-nested neural networks (HNNs) and structured optimization (SO). Two HNNs, built upon recent fully convolutional networks (FCNs) and deeply supervised networks (DSNs), are trained to learn the LNC appearance (HNN-A) and contour (HNN-C) probabilistic output maps, respectively. Like an FCN, an HNN produces class label maps with the same resolution as the input image. Afterwards, the HNN predictions for LNC appearance and contour cues are formulated into the unary and pairwise terms of conditional random fields (CRFs), which are subsequently solved using one of three different SO methods: dense CRF, graph cuts, and boundary neural fields (BNF). BNF yields the highest quantitative results. Its mean Dice coefficient between segmented and ground-truth LN volumes is 82.1% ± 9.6%, compared to 73.0% ± 17.6% for HNN-A alone. The LNC relative volume (\(cm^3\)) difference is 13.7% ± 13.1%, a promising result for the development of LN imaging biomarkers based on volumetric measurements.
- Published
- 2016
- Full Text
- View/download PDF
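The CRF formulation, unary terms from the HNN-A appearance map and pairwise terms modulated by the HNN-C contour map, can be illustrated with a toy energy function (a sketch of the general idea, not the paper’s exact potentials or solvers):

```python
import numpy as np

def crf_energy(labels, app_prob, contour_prob, w=1.0, eps=1e-7):
    """Toy CRF energy for a binary labeling: unary terms from an
    appearance probability map (HNN-A role), pairwise 4-neighbour
    penalties discounted where a contour map (HNN-C role) expects a
    boundary, so cuts along predicted contours are cheap."""
    p = np.clip(app_prob, eps, 1 - eps)
    unary = np.where(labels == 1, -np.log(p), -np.log(1 - p)).sum()
    pair = 0.0
    for a, b, c in ((labels[:, :-1], labels[:, 1:], contour_prob[:, 1:]),
                    (labels[:-1, :], labels[1:, :], contour_prob[1:, :])):
        pair += (w * (a != b) * (1.0 - c)).sum()
    return float(unary + pair)

app = np.full((3, 3), 0.1)      # background-leaning appearance everywhere
contour = np.zeros((3, 3))      # no predicted boundaries
bg = np.zeros((3, 3), dtype=int)
one = bg.copy(); one[1, 1] = 1  # an isolated foreground pixel
e_bg = crf_energy(bg, app, contour)
e_one = crf_energy(one, app, contour)
```

Minimizing such an energy (via dense CRF, graph cuts, or BNF) favors labelings whose transitions follow the predicted contours.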
40. Multi-atlas Segmentation with Joint Label Fusion of Osteoporotic Vertebral Compression Fractures on CT
- Author
-
Jianhua Yao, Ronald M. Summers, Holger R. Roth, Joseph E. Burns, and Yinong Wang
- Subjects
musculoskeletal diseases ,medicine.medical_specialty ,business.industry ,Vertebral compression fracture ,Lumbar vertebrae ,medicine.disease ,Compression (physics) ,Accurate segmentation ,medicine.anatomical_structure ,Sørensen–Dice coefficient ,Orthopedic surgery ,medicine ,Multi atlas segmentation ,business ,Nuclear medicine ,Vertebral column - Abstract
The precise and accurate segmentation of the vertebral column is essential in the diagnosis and treatment of various orthopedic, neurological, and oncological traumas and pathologies. Segmentation is especially challenging in the presence of pathology such as vertebral compression fractures. In this paper, we propose a method to produce segmentations for osteoporotic compression fractured vertebrae by applying a multi-atlas joint label fusion technique to clinical computed tomography (CT) images. A total of 170 thoracic and lumbar vertebrae were evaluated using atlases from five patients with varying degrees of spinal degeneration. In an osteoporotic cohort of bundled atlases, registration provided an average Dice coefficient and mean absolute surface distance of 92.7 ± 4.5% and 0.32 ± 0.13 mm for osteoporotic vertebrae, respectively, and 90.9 ± 3.0% and 0.36 ± 0.11 mm for compression fractured vertebrae.
- Published
- 2016
- Full Text
- View/download PDF
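Joint label fusion is, at its core, locally weighted voting among registered atlases; a simplified single-patch sketch follows (true joint label fusion also models pairwise atlas error correlations, which this omits, and all names here are illustrative):

```python
import numpy as np

def fuse_labels(target_patch, atlas_patches, atlas_labels, beta=1.0):
    """Weighted-voting label fusion: each registered atlas votes with a
    weight that decays with its local intensity difference from the
    target patch (a simplification of joint label fusion)."""
    errs = np.array([np.mean((target_patch - ap) ** 2) for ap in atlas_patches])
    w = np.exp(-beta * errs)
    w /= w.sum()
    vote = np.sum(w * np.array(atlas_labels, float))
    return int(vote >= 0.5), w

target = np.zeros((3, 3))
atlases = [np.zeros((3, 3)), np.full((3, 3), 5.0)]  # similar vs. dissimilar
votes = [1, 0]
label, w = fuse_labels(target, atlases, votes)
```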
41. DeepOrgan: Multi-level Deep Convolutional Networks for Automated Pancreas Segmentation
- Author
-
Holger R. Roth, Evrim B. Turkbey, Le Lu, Jiamin Liu, Hoo-Chang Shin, Amal Farag, and Ronald M. Summers
- Subjects
Conditional random field ,business.industry ,Computer science ,Gaussian blur ,Scale-space segmentation ,Context (language use) ,Pattern recognition ,Convolutional neural network ,k-nearest neighbors algorithm ,Random forest ,symbols.namesake ,symbols ,Segmentation ,Computer vision ,Artificial intelligence ,business - Abstract
Automatic organ segmentation is an important yet challenging problem for medical image analysis. The pancreas is an abdominal organ with very high anatomical variability. This inhibits previous segmentation methods from achieving high accuracies, especially compared to other organs such as the liver, heart, or kidneys. In this paper, we present a probabilistic bottom-up approach for pancreas segmentation in abdominal computed tomography (CT) scans, using multi-level deep convolutional networks (ConvNets). We propose and evaluate several variations of deep ConvNets in the context of hierarchical, coarse-to-fine classification on image patches and regions, i.e., superpixels. We first present a dense labeling of local image patches via P-ConvNet and nearest-neighbor fusion. Then we describe a regional ConvNet (R1-ConvNet) that samples a set of bounding boxes around each image superpixel at different context scales in a “zoom-out” fashion. Our ConvNets learn to assign each superpixel region a class probability of being pancreas. Last, we study a stacked R2-ConvNet leveraging the joint space of CT intensities and the P-ConvNet dense probability maps. Both 3D Gaussian smoothing and 2D conditional random fields are exploited as structured predictions for post-processing. We evaluate on CT images of 82 patients in 4-fold cross-validation. We achieve a Dice Similarity Coefficient of 83.6 ± 6.3% in training and 71.8 ± 10.7% in testing.
- Published
- 2015
- Full Text
- View/download PDF
42. Detection of Sclerotic Spine Metastases via Random Aggregation of Deep Convolutional Neural Network Classifications
- Author
-
Jianhua Yao, Ronald M. Summers, Holger R. Roth, Le Lu, James Stieger, and Joseph E. Burns
- Subjects
Lesion ,medicine.diagnostic_test ,Computer science ,Bone lesion ,medicine ,Computed tomography ,medicine.symptom ,Convolutional neural network ,Algorithm - Abstract
Automated detection of sclerotic metastases (bone lesions) in computed tomography (CT) images has the potential to be an important tool in clinical practice and research. State-of-the-art methods show a performance of 79% sensitivity, or true-positive (TP) rate, at 10 false positives (FP) per volume. We design a two-tiered coarse-to-fine cascade framework that first operates a highly sensitive candidate generation system at a maximum sensitivity of \(\sim \)92% but with a high FP level (\(\sim \)50 per patient). Regions of interest (ROI) for lesion candidates are generated in this step and serve as input for the second tier. In the second tier, we generate \(N\) 2D views per ROI via scaling, random translations, and rotations with respect to the ROI centroid coordinates. These random views are used to train a deep convolutional neural network (CNN) classifier. In testing, the CNN assigns individual probabilities to a new set of \(N\) random views that are averaged at each ROI to compute a final per-candidate classification probability. This second tier behaves as a highly selective process that rejects difficult false positives while preserving high sensitivities. We validate the approach on CT images of 59 patients (49 with sclerotic metastases and 10 normal controls). The proposed method reduces the number of FP/vol. from 4 to 1.2, 7 to 3, and 12 to 9.5 at sensitivity rates of 60, 70, and 80%, respectively, in testing. The area under the curve (AUC) is 0.834. The results show marked improvement over previous work.
- Published
- 2015
- Full Text
- View/download PDF
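The second-tier idea above — averaging a CNN's probabilities over N random 2D views of one candidate — can be illustrated with a minimal sketch. The CNN outputs are simulated here with random numbers (an assumption for the example, not the paper's data): a true lesion scores high on most views, while a difficult false positive only scores high on a few, so averaging separates them:

```python
import numpy as np

def candidate_probability(view_probs):
    """Final per-candidate score: the mean of the classifier probabilities
    assigned to N randomly transformed 2D views of one ROI."""
    return float(np.mean(view_probs))

rng = np.random.default_rng(0)
# Hypothetical CNN outputs for N = 50 random views per candidate.
true_lesion_views = rng.uniform(0.7, 1.0, size=50)
false_pos_views = np.concatenate([rng.uniform(0.7, 1.0, size=5),
                                  rng.uniform(0.0, 0.3, size=45)])
p_tp = candidate_probability(true_lesion_views)   # stays high after averaging
p_fp = candidate_probability(false_pos_views)     # pulled down by most views
```

Averaging acts as the "highly selective" filter: a candidate must look lesion-like from many viewpoints, not just one, to keep a high score.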
43. Leveraging Mid-Level Semantic Boundary Cues for Automated Lymph Node Detection
- Author
-
Ari Seff, Le Lu, Holger R. Roth, Ronald M. Summers, Adrian Barbu, and Hoo-Chang Shin
- Subjects
business.industry ,Feature (computer vision) ,Computer science ,Image map ,Histogram ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Robust statistics ,Boundary (topology) ,Pattern recognition ,Artificial intelligence ,business ,Convolutional neural network ,Random forest - Abstract
Histograms of oriented gradients (HOG) are widely employed image descriptors in modern computer-aided diagnosis systems. Built upon a set of local, robust statistics of low-level image gradients, HOG features are usually computed on raw intensity images. In this paper, we explore a learned image transformation scheme for producing higher-level inputs to HOG. Leveraging semantic object boundary cues, our methods compute data-driven image feature maps via a supervised boundary detector. Compared with the raw image map, boundary cues offer mid-level, more object-specific visual responses that are well suited to subsequent HOG encoding. We validate integrations of several image transformation maps with an application to computer-aided detection of lymph nodes on thoracoabdominal CT images. Our experiments demonstrate that HOG descriptors based on semantic boundary cues complement and enrich raw intensity alone. The overall system achieves substantially improved results (~78% versus 60% recall at 3 FP/volume for two target regions). The proposed system also moderately outperforms the state-of-the-art deep convolutional neural network (CNN) system in the mediastinum region, without relying on data augmentation and requiring significantly fewer training samples.
- Published
- 2015
- Full Text
- View/download PDF
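The core idea above is that a HOG-style descriptor can be computed on a learned boundary-probability map instead of raw intensities. A minimal, cell-free orientation histogram (an illustrative simplification of full HOG, with invented function names) makes the mechanism concrete:

```python
import numpy as np

def orientation_histogram(img, n_bins=9):
    """Minimal HOG-style descriptor for one patch: finite-difference
    gradients, unsigned orientations binned into n_bins, each pixel
    weighted by its gradient magnitude (no cells/blocks, for brevity)."""
    gy, gx = np.gradient(img.astype(float))   # np.gradient returns axis-0 first
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Feeding the descriptor a crisp boundary map (here a toy vertical edge)
# yields a clean, object-specific response: all mass lands in the bin for
# horizontal gradients, unlike the noisier histogram of a raw-intensity patch.
boundary_map = np.zeros((8, 8))
boundary_map[:, 4] = 1.0
h = orientation_histogram(boundary_map)
```

In the paper's pipeline the input map would come from a supervised boundary detector rather than a synthetic edge.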
44. 2D View Aggregation for Lymph Node Detection Using a Shallow Hierarchy of Linear Classifiers
- Author
-
Evrim B. Turkbey, Ari Seff, Kevin M. Cherry, Ronald M. Summers, Holger R. Roth, Le Lu, Joanne Hoffman, Shijun Wang, and Jiamin Liu
- Subjects
Linear classifier ,Machine learning ,computer.software_genre ,Sensitivity and Specificity ,Article ,Pattern Recognition, Automated ,Imaging, Three-Dimensional ,Artificial Intelligence ,False positive paradox ,Humans ,Computer Simulation ,Mathematics ,business.industry ,Linear model ,Reproducibility of Results ,Pattern recognition ,Object detection ,Random forest ,Radiographic Image Enhancement ,Histogram of oriented gradients ,Feature (computer vision) ,Lymphatic Metastasis ,Pattern recognition (psychology) ,Linear Models ,Radiographic Image Interpretation, Computer-Assisted ,Lymph Nodes ,Artificial intelligence ,Tomography, X-Ray Computed ,business ,computer ,Algorithms - Abstract
Enlarged lymph nodes (LNs) can provide important information for cancer diagnosis, staging, and measuring treatment response, making automated detection a highly sought goal. In this paper, we propose a new algorithmic representation that decomposes the LN detection problem into a set of 2D object detection subtasks on sampled CT slices, largely alleviating the curse of dimensionality. Each 2D detection can be effectively formulated as linear classification on a single image feature type, the Histogram of Oriented Gradients (HOG), covering a moderate field of view of 45 by 45 voxels. We exploit both simple pooling and sparse linear fusion schemes to aggregate these 2D detection scores for the final 3D LN detection. In this manner, detection is more tractable and need not perform perfectly at the instance level (individual detections act as weak hypotheses), since the aggregation process robustly harnesses their collective information. Two datasets (90 patients with 389 mediastinal LNs and 86 patients with 595 abdominal LNs) are used for validation. Cross-validation demonstrates 78.0% sensitivity at 6 false positives/volume (FP/vol.) (86.1% at 10 FP/vol.) and 73.1% sensitivity at 6 FP/vol. (87.2% at 10 FP/vol.) for the mediastinal and abdominal datasets, respectively. Our results compare favorably to previous state-of-the-art methods.
- Published
- 2014
- Full Text
- View/download PDF
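The aggregation step above — pooling many per-slice linear-classifier scores into one 3D candidate score — can be sketched with simple average pooling; the optional top-k variant is a crude stand-in for the paper's sparse linear fusion (the function, scores, and top-k scheme are illustrative assumptions, not the published method):

```python
import numpy as np

def aggregate_views(scores, top_k=None):
    """Pool per-slice 2D detection scores into one 3D candidate score.
    Plain average pooling; if top_k is given, average only the top_k
    strongest views (a rough proxy for sparse linear fusion)."""
    s = np.sort(np.asarray(scores, dtype=float))[::-1]   # descending
    if top_k is not None:
        s = s[:top_k]
    return float(s.mean())

# Hypothetical linear-classifier margins for six sampled slices through
# one lymph-node candidate; individual slices are allowed to be weak.
slice_scores = [2.1, 1.7, 0.4, -0.3, 1.9, 0.8]
full_avg = aggregate_views(slice_scores)       # pools all weak hypotheses
top3_avg = aggregate_views(slice_scores, 3)    # focuses on strongest views
```

Because the final score pools many views, a few poor slices (like the -0.3 above) do not sink an otherwise consistent candidate.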
45. Reliability and validity of the German version of the University of Jyvaskyla Active Aging Scale (UJACAS-G).
- Author
-
Hinrichs T, Rantanen T, Portegijs E, Nebiker L, Rössler R, Schwendinger F, Schmidt-Trucksäss A, and Roth R
- Subjects
- Humans, Aged, Male, Female, Reproducibility of Results, Geriatric Assessment methods, Germany, Surveys and Questionnaires, Exercise physiology, Aged, 80 and over, Aging physiology, Aging psychology, Psychometrics methods, Hand Strength physiology, Quality of Life
- Abstract
Background: The University of Jyvaskyla Active Aging Scale (UJACAS) assesses active aging through willingness, ability, opportunity, and frequency of involvement in activities. Recognizing the lack of a German version, the Finnish original was translated (UJACAS-G). This study aimed: (1) to evaluate the test-retest reliability of UJACAS-G; and (2) to explore correlations with health-related parameters (concurrent validity)., Methods: The study (test-retest design) targeted healthy older adults aged 65+. Reliability of UJACAS-G (total and subscores) was assessed using Bland-Altman analyses and Intraclass Correlation Coefficients (ICCs). Furthermore, correlations (Spearman's rho) between UJACAS-G scores and physical function (walking speed, handgrip strength, balance, 6-minute walk distance), physical activity (International Physical Activity Questionnaire), life-space mobility (Life-Space Assessment), and health-related quality of life (Short Form-36 Health Survey) were calculated., Results: Bland-Altman analyses (N = 60; mean age 72.3, SD 5.9 years; 50% women) revealed mean differences close to zero and narrow limits of agreement for all scores (total score: mean difference -1.9; limits -31.7 to 27.9). The ability subscore showed clustering at its upper limit. ICC was 0.829 (95% CI 0.730 to 0.894) for the total score and ranged between 0.530 and 0.876 for subscores (all p-values < 0.001). The total score correlated with walking speed (rho = 0.345; p = 0.008), physical activity (rho = 0.279; p = 0.033) and mental health (rho = 0.329; p = 0.010)., Conclusions: UJACAS-G is reliable for assessing active aging among German-speaking healthy older adults. A potential 'ceiling effect' regarding the ability subscore should be considered when applying UJACAS-G to well-functioning populations. Analyses of concurrent validity indicated only weak correlations with health-related parameters., (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
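The two reliability statistics used above — Bland-Altman limits of agreement and the intraclass correlation coefficient — can be computed directly. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single measures, a common choice for test-retest designs; the study may have used a different ICC form) on invented toy scores, not the study's data:

```python
import numpy as np

def bland_altman(test, retest):
    """Mean test-retest difference and 95% limits of agreement."""
    d = np.asarray(test, float) - np.asarray(retest, float)
    md, sd = d.mean(), d.std(ddof=1)
    return md, md - 1.96 * sd, md + 1.96 * sd

def icc_2_1(test, retest):
    """ICC(2,1): two-way random effects, absolute agreement, single measures."""
    x = np.column_stack([test, retest]).astype(float)
    n, k = x.shape
    grand = x.mean()
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between sessions
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical UJACAS-G-like total scores for six participants, two sessions.
test_s   = [40, 55, 62, 48, 70, 58]
retest_s = [42, 53, 60, 50, 69, 57]
md, lo, hi = bland_altman(test_s, retest_s)
icc = icc_2_1(test_s, retest_s)   # high: sessions agree closely per subject
```

A mean difference near zero with narrow limits, plus an ICC near 1, is the pattern the study reports for the total score.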
46. Best Time of Day for Strength and Endurance Training to Improve Health and Performance? A Systematic Review with Meta-analysis.
- Author
-
Bruggisser F, Knaier R, Roth R, Wang W, Qian J, and Scheer FAJL
- Abstract
Background: Current recommendations for physical exercise include information about the frequency, intensity, type, and duration of exercise. However, to date, there are no recommendations on what time of day one should exercise. The aim was to perform a systematic review with meta-analysis to investigate if the time of day of exercise training in intervention studies influences the degree of improvements in physical performance or health-related outcomes., Methods: The databases EMBASE, PubMed, Cochrane Library, and SPORTDiscus were searched from inception to January 2023. Eligibility criteria were that the studies conducted structured endurance and/or strength training with a minimum of two exercise sessions per week for at least 2 weeks and compared exercise training between at least two different times of the day using a randomized crossover or parallel group design., Results: From 14,125 screened articles, 26 articles were included in the systematic review of which seven were also included in the meta-analyses. Both the qualitative synthesis and the quantitative synthesis (i.e., meta-analysis) provide little evidence for or against the hypothesis that training at a specific time of day leads to more improvements in performance-related or health-related outcomes compared to other times. There was some evidence that there is a benefit when training and testing occur at the same time of day, mainly for performance-related outcomes. Overall, the risk of bias in most studies was high., Conclusions: The current state of research provides evidence neither for nor against a specific time of the day being more beneficial, but provides evidence for larger effects when there is congruency between training and testing times. This review provides recommendations to improve the design and execution of future studies on this topic., Registration: PROSPERO (CRD42021246468)., (© 2023. The Author(s).)
- Published
- 2023
- Full Text
- View/download PDF
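The quantitative synthesis above rests on pooling study effect sizes. A minimal fixed-effect inverse-variance pooling sketch shows the basic mechanics (the effect sizes and variances are invented; the review's actual models and data differ):

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance meta-analysis: each study is weighted
    by 1/variance, so more precise studies contribute more to the pooled
    estimate. Returns the estimate and its 95% confidence interval."""
    weights = [1.0 / v for v in variances]
    wsum = sum(weights)
    est = sum(w * e for w, e in zip(weights, effects)) / wsum
    se = math.sqrt(1.0 / wsum)
    return est, (est - 1.96 * se, est + 1.96 * se)

# Hypothetical standardized mean differences (e.g., morning vs. evening
# training effect) from three studies, with their sampling variances.
est, ci = pooled_effect([0.20, -0.05, 0.10], [0.04, 0.09, 0.06])
# A confidence interval spanning zero would match the review's conclusion
# of little evidence for one time of day over another.
```

Random-effects models add a between-study variance term to the weights, but the weighting principle is the same.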
47. Phosphatized adductor muscle remains in a Cenomanian limid bivalve from Villers-sur-Mer (France).
- Author
-
Klug C, Hüne L, Roth R, and Hautmann M
- Abstract
Soft-tissue preservation in molluscs is generally rare, particularly in bivalves and gastropods. Here, we report a three-dimensionally preserved specimen of the limid Acesta clypeiformis from the Cenomanian of France that shows preservation of organic structures of the adductor muscles. Examination under UV-light revealed likely phosphatisation of organic remains, which was corroborated by EDX-analyses. We suggest that the parts of the adductor muscles that are very close to the attachment are particularly resistant to decay and thus may be preserved even under taphonomic conditions usually not favouring soft-tissue fossilisation., Competing Interests: Competing interestsWe have no competing interests., (© The Author(s) 2022.)
- Published
- 2022
- Full Text
- View/download PDF
48. Preservation of nautilid soft parts inside and outside the conch interpreted as central nervous system, eyes, and renal concrements from the Lebanese Cenomanian.
- Author
-
Klug C, Pohle A, Roth R, Hoffmann R, Wani R, and Tajika A
- Abstract
Nautilid, coleoid and ammonite cephalopods preserving jaws and soft tissue remains are moderately common in the extremely fossiliferous Konservat-Lagerstätte of the Hadjoula, Haqel and Sahel Aalma region, Lebanon. We assume that hundreds of cephalopod fossils from this region with soft-tissues lie in collections worldwide. Here, we describe two specimens of Syrionautilus libanoticus (Cymatoceratidae, Nautilida, Cephalopoda) from the Cenomanian of Hadjoula. Both specimens preserve soft parts, but only one shows an imprint of the conch. The specimen without conch displays a lot of anatomical detail. We homologise the fossilised structures as remains of the digestive tract, the central nervous system, the eyes, and the mantle. Small phosphatic structures in the middle of the body chamber of the specimen with conch are tentatively interpreted as renal concrements (uroliths). The absence of any trace of arms and the hood of the specimen lacking its conch is tentatively interpreted as an indication that this is another leftover fall (pabulite), where a predator lost parts of its prey. Other interpretations such as incomplete scavenging are also conceivable., Competing Interests: Competing interestsWe have no competing interests., (© The Author(s) 2021.)
- Published
- 2021
- Full Text
- View/download PDF
49. A multicenter paper-based and web-based system for collecting patient-reported outcome measures in patients undergoing local treatment for prostate cancer: first experiences.
- Author
-
Kowalski C, Roth R, Carl G, Feick G, Oesterle A, Hinkel A, Steiner T, Brock M, Kaftan B, Borowitz R, Zantl N, Heidenreich A, Neisius A, Darr C, Bolenz C, Beyer B, Pfitzenmaier J, Brehmer B, Fichtner J, Haben B, Wesselmann S, and Dieng S
- Abstract
Purpose: To give an overview of the multicenter Prostate Cancer Outcomes (PCO) study, involving paper-based and web-based collection of patient-reported outcome measures (PROM) in patients undergoing local treatment for prostate cancer in certified centers in Germany. The PCO study is part of the larger Movember-funded TrueNTH Global Registry. The article reports on the study's design and provides a brief progress report after the first 2 years of data collection., Methods: Prostate cancer centers (PCCs) certified according to German Cancer Society requirements were invited to participate in collecting patient-reported information on symptoms and function before and at least once (at 12 months) after treatment. The data were matched with disease and treatment information. This report describes progress in patient inclusion, response rate, and variations between centers relative to online/paper use, and also data quality, including recruitment variations relative to treatment in the first participating PCCs., Results: PCC participation increased over time; 44 centers had transferred data for 3094 patients at the time of this report. Patient recruitment varied widely across centers. Recruitment was highest among patients undergoing radical prostatectomy. The completeness of the data was good, except for comorbidity information., Conclusions: The PCO study benefits from a quality improvement system first established over 10 years ago, requiring collection and harmonization of a predefined clinical dataset across centers. Nevertheless, establishing a PROM routine requires substantial effort on the part of providers and constant monitoring in order to achieve high-quality data. The findings reported here may be useful for guiding implementation in similar initiatives.
- Published
- 2020
- Full Text
- View/download PDF