18 results on "Hassanpour, Saeed"
Search Results
2. Masked Pre-Training of Transformers for Histology Image Analysis
- Author
- Jiang, Shuai, Hondelink, Liesbeth, Suriawinata, Arief A., and Hassanpour, Saeed
- Subjects
- FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition
- Abstract
In digital pathology, whole slide images (WSIs) are widely used for applications such as cancer diagnosis and prognosis prediction. Visual transformer models have recently emerged as a promising method for encoding large regions of WSIs while preserving spatial relationships among patches. However, due to the large number of model parameters and limited labeled data, applying transformer models to WSIs remains challenging. Inspired by masked language models, we propose a pretext task for training the transformer model without labeled data to address this problem. Our model, MaskHIT, uses the transformer output to reconstruct masked patches and learn representative histological features based on their positions and visual features. The experimental results demonstrate that MaskHIT surpasses various multiple instance learning approaches by 3% and 2% on survival prediction and cancer subtype classification tasks, respectively. Furthermore, MaskHIT also outperforms two of the most recent state-of-the-art transformer-based methods. Finally, a comparison of the attention maps generated by the MaskHIT model with pathologists' annotations indicates that the model can accurately identify clinically relevant histological structures in each task.
- Published
- 2023
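A minimal PyTorch sketch of the masked pre-training idea described in the MaskHIT abstract above: a learnable mask token replaces a random subset of pre-extracted patch embeddings, and a transformer encoder is trained to reconstruct the originals at the masked positions. The dimensions, masking ratio, and reconstruction loss below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MaskedPatchPretrainer(nn.Module):
    """Illustrative masked pre-training over pre-extracted patch embeddings.

    A learnable mask vector replaces a random subset of patch embeddings; a
    transformer encoder must reconstruct the original embeddings at the masked
    positions (sizes and ratios are assumptions, not MaskHIT's settings).
    """

    def __init__(self, dim=384, depth=4, heads=6, mask_ratio=0.5):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.mask_token = nn.Parameter(torch.zeros(dim))
        self.mask_ratio = mask_ratio
        self.head = nn.Linear(dim, dim)  # predicts the original patch embedding

    def forward(self, patches):  # patches: (B, N, dim); positional info would be added in practice
        B, N, D = patches.shape
        mask = torch.rand(B, N, device=patches.device) < self.mask_ratio
        corrupted = torch.where(mask.unsqueeze(-1), self.mask_token.expand(B, N, D), patches)
        recon = self.head(self.encoder(corrupted))
        return ((recon - patches) ** 2)[mask].mean()  # MSE only on masked positions

# toy usage: a batch of 2 "slides", each with 100 patch embeddings
model = MaskedPatchPretrainer()
loss = model(torch.randn(2, 100, 384))
loss.backward()
```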
3. Calibrating Histopathology Image Classifiers using Label Smoothing
- Author
- Wei, Jerry, Torresani, Lorenzo, Wei, Jason, and Hassanpour, Saeed
- Subjects
- FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); FOS: Electrical engineering, electronic engineering, information engineering; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; Computer Science - Computer Vision and Pattern Recognition; Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
The classification of histopathology images fundamentally differs from traditional image classification tasks because histopathology images naturally exhibit a range of diagnostic features, resulting in a diverse range of annotator agreement levels. However, examples with high annotator disagreement are often either assigned the majority label or discarded entirely when training histopathology image classifiers. This widespread practice often yields classifiers that do not account for example difficulty and exhibit poor model calibration. In this paper, we ask: can we improve model calibration by endowing histopathology image classifiers with inductive biases about example difficulty? We propose several label smoothing methods that utilize per-image annotator agreement. Though our methods are simple, we find that they substantially improve model calibration, while maintaining (or even improving) accuracy. For colorectal polyp classification, a common yet challenging task in gastrointestinal pathology, we find that our proposed agreement-aware label smoothing methods reduce calibration error by almost 70%. Moreover, we find that using model confidence as a proxy for annotator agreement also improves calibration and accuracy, suggesting that datasets without multiple annotators can still benefit from our approach via confidence-aware label smoothing. Given the importance of calibration (especially in histopathology image analysis), the improvements from our proposed techniques merit further exploration and potential implementation in other histopathology image classification tasks.
- Published
- 2022
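A hedged sketch of the agreement-aware label smoothing idea from the abstract above: the majority-vote class receives probability mass equal to the annotator agreement fraction, and the remainder is spread uniformly over the other classes. The exact smoothing formula is an assumption for illustration, not the paper's formulation.

```python
import torch

def agreement_smoothed_targets(majority_labels, agreement, num_classes):
    """Build soft targets from per-image annotator agreement (illustrative recipe).

    majority_labels: (B,) int64 majority-vote class per image
    agreement:       (B,) float in (0, 1], fraction of annotators who chose that class
    Returns (B, C) soft targets: the majority class gets the agreement mass,
    the rest is spread uniformly over the remaining classes.
    """
    B = majority_labels.shape[0]
    off_mass = (1.0 - agreement) / (num_classes - 1)
    targets = off_mass.unsqueeze(1).expand(B, num_classes).clone()
    targets[torch.arange(B), majority_labels] = agreement
    return targets

# usage with soft-label cross-entropy (probability targets need a recent PyTorch)
logits = torch.randn(4, 3, requires_grad=True)
targets = agreement_smoothed_targets(
    torch.tensor([0, 2, 1, 0]), torch.tensor([1.0, 0.71, 0.57, 0.86]), num_classes=3)
loss = torch.nn.functional.cross_entropy(logits, targets)
loss.backward()
```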
4. HistoPerm: A Permutation-Based View Generation Approach for Improving Histopathologic Feature Representation Learning
- Author
- DiPalma, Joseph, Torresani, Lorenzo, and Hassanpour, Saeed
- Subjects
- FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition
- Abstract
Deep learning has been effective for histology image analysis in digital pathology. However, many current deep learning approaches require large, strongly- or weakly-labeled images and regions of interest, which can be time-consuming and resource-intensive to obtain. To address this challenge, we present HistoPerm, a view generation method that enhances representation learning for histology images with joint embedding architectures. HistoPerm permutes augmented views of patches extracted from whole-slide histology images to improve classification performance. We evaluated the effectiveness of HistoPerm on two histology image datasets for Celiac disease and Renal Cell Carcinoma, using three widely used joint embedding architecture-based representation learning methods: BYOL, SimCLR, and VICReg. Our results show that HistoPerm consistently improves patch- and slide-level classification performance in terms of accuracy, F1-score, and AUC. Specifically, for patch-level classification accuracy on the Celiac disease dataset, HistoPerm boosts BYOL and VICReg by 8% and SimCLR by 3%. On the Renal Cell Carcinoma dataset, patch-level classification accuracy is increased by 2% for BYOL and VICReg, and by 1% for SimCLR. In addition, on the Celiac disease dataset, models with HistoPerm outperform the fully-supervised baseline model by 6%, 5%, and 2% for BYOL, SimCLR, and VICReg, respectively. For the Renal Cell Carcinoma dataset, HistoPerm narrows the classification accuracy gap between these models and the fully-supervised baseline by up to 10%. These findings suggest that HistoPerm can be a valuable tool for improving representation learning of histopathology features when access to labeled data is limited and can lead to whole-slide classification results that are comparable to or superior to fully-supervised methods.
- Published
- 2022
- Full Text
- View/download PDF
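The HistoPerm abstract above describes permuting augmented views of patches before the joint-embedding loss. One plausible reading, sketched below under that assumption, is to shuffle the second-branch views among patches drawn from the same slide, so each patch is paired with a view of a different patch sharing that slide's label; this is not necessarily the authors' exact pairing rule.

```python
import torch

def permute_views_within_slide(view_b, slide_ids):
    """Permutation-based view generation sketch for a two-branch method (BYOL/SimCLR/VICReg).

    view_b:    (B, C, H, W) second-branch augmented views
    slide_ids: (B,) slide identifier for each patch
    The second-branch views are shuffled among patches from the same slide, an
    assumed reading of the abstract rather than the published procedure.
    """
    permuted = view_b.clone()
    for sid in slide_ids.unique():
        idx = (slide_ids == sid).nonzero(as_tuple=True)[0]
        permuted[idx] = view_b[idx[torch.randperm(len(idx))]]
    return permuted

# usage inside a standard two-view training step (backbone and loss omitted)
views_a = torch.randn(8, 3, 224, 224)
views_b = torch.randn(8, 3, 224, 224)
slide_ids = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2])
views_b = permute_views_within_slide(views_b, slide_ids)
```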
5. Managing the transportation of hazardous materials with time windows and uncertainties
- Author
- Tasouji Hassanpour, Saeed
- Abstract
Modern life undeniably depends on numerous hazardous products. Given the dangerous nature of these materials, providing versatile means of transportation is essential. The flexibility and applicability of truck transportation have made it the most favorable mode for conveying hazardous materials (hazmats), yet shipping performance can be highly susceptible to ever-changing traffic and weather conditions. Motivated by the importance, as well as the lack, of joint consideration of uncertainties, random disruptions, and time-relevant issues, this research examines location-routing decisions in hazmat transportation by applying stochastic and robust programming models to vehicle routing problems with time windows, so as to ensure efficiency, efficacy, and equity. Providing effective solutions to hazmat location-routing problems is of significant importance for both logistics companies and the government. Exact and heuristic algorithms are explored for timely and accurate solutions. To assess the practicability and validity of the proposed approaches, real-world case studies are investigated from an optimization perspective, from which we derive managerial insights that enhance decision-making for system stakeholders. This thesis contributes to the current literature in three ways. First, we develop a scenario-based robust location-routing model for hazmat transportation with joint consideration of time windows, time dependency, multiple existing paths between nodes, and disruptions. Second, a stochastic location-routing problem for infectious waste during a pandemic is studied in a 3-tier network; embedding temporary facilities, uncertainty, and chance-constrained time windows into the model, a branch-and-price algorithm is developed to solve it to optimality. Finally, the stochastic location-routing problem of a hazardous waste network is addressed using a three-stage decision framework whose critical features are stochastic waste release dates and a risk-aversion parameter. The framework combines a cost-clustering approach, a risk-oriented a priori plan, and recourse actions for location-allocation, routing, and adaptation decisions, respectively. We summarize the contributions of the thesis, discuss the overall results, and present potential future research directions.
- Published
- 2022
- Full Text
- View/download PDF
6. Interpretation Quality Score for Measuring the Quality of interpretability methods
- Author
- Xie, Yuansheng, Vosoughi, Soroush, and Hassanpour, Saeed
- Subjects
- FOS: Computer and information sciences; Computer Science - Machine Learning; Computer Science - Computation and Language; Computation and Language (cs.CL); Machine Learning (cs.LG)
- Abstract
Machine learning (ML) models have been applied to a wide range of natural language processing (NLP) tasks in recent years. In addition to making accurate decisions, the necessity of understanding how models make their decisions has become apparent in many applications. To that end, many interpretability methods that help explain the decision processes of ML models have been developed. Yet, there currently exists no widely-accepted metric to evaluate the quality of explanations generated by these methods. As a result, there currently is no standard way of measuring to what degree an interpretability method achieves an intended objective. Moreover, there is no accepted standard of performance by which we can compare and rank the current existing interpretability methods. In this paper, we propose a novel metric for quantifying the quality of explanations generated by interpretability methods. We compute the metric on three NLP tasks using six interpretability methods and present our results.
- Published
- 2022
- Full Text
- View/download PDF
7. A Petri Dish for Histopathology Image Analysis
- Author
- Wei, Jerry, Suriawinata, Arief, Ren, Bing, Liu, Xiaoying, Lisovsky, Mikhail, Vaickus, Louis, Brown, Charles, Baker, Michael, Tomita, Naofumi, Torresani, Lorenzo, Wei, Jason, and Hassanpour, Saeed
- Subjects
- FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Computer Science - Computer Vision and Pattern Recognition; FOS: Electrical engineering, electronic engineering, information engineering; Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
With the rise of deep learning, there has been increased interest in using neural networks for histopathology image analysis, a field that investigates the properties of biopsy or resected specimens that are traditionally examined manually under a microscope by pathologists. However, challenges such as limited data, costly annotation, and processing high-resolution and variable-size images make it difficult to quickly iterate over model designs. Throughout scientific history, many significant research directions have leveraged small-scale experimental setups as petri dishes to efficiently evaluate exploratory ideas. In this paper, we introduce a minimalist histopathology image analysis dataset (MHIST), an analogous petri dish for histopathology image analysis. MHIST is a binary classification dataset of 3,152 fixed-size images of colorectal polyps, each with a gold-standard label determined by the majority vote of seven board-certified gastrointestinal pathologists and an annotator agreement level. MHIST occupies less than 400 MB of disk space, and a ResNet-18 baseline can be trained to convergence on MHIST in just 6 minutes using 3.5 GB of memory on an NVIDIA RTX 3090. As example use cases, we use MHIST to study natural questions such as how dataset size, network depth, transfer learning, and high-disagreement examples affect model performance. By introducing MHIST, we hope to not only help facilitate the work of current histopathology imaging researchers, but also make the field more accessible to the general community. Our dataset is available at https://bmirds.github.io/MHIST., Comment: In proceedings of Artificial Intelligence in Medicine (AIME) 2021
- Published
- 2021
- Full Text
- View/download PDF
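A minimal sketch of the ResNet-18 baseline use case mentioned in the MHIST abstract above. The directory layout, transforms, and hyperparameters are assumptions; the actual release may organize labels differently (e.g., images plus a CSV of labels and annotator agreement).

```python
import torch
import torchvision
from torch.utils.data import DataLoader
from torchvision import transforms

# Assumed layout: mhist/train/<class>/*.png; adjust loading to the real release format.
tfm = transforms.Compose([transforms.Resize(224), transforms.ToTensor()])
train_ds = torchvision.datasets.ImageFolder("mhist/train", transform=tfm)
loader = DataLoader(train_ds, batch_size=64, shuffle=True, num_workers=4)

model = torchvision.models.resnet18(weights=None)    # baseline architecture from the abstract
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # binary polyp classification
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        loss = torch.nn.functional.cross_entropy(model(images), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
```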
8. A Refined Deep Learning Architecture for Diabetic Foot Ulcers Detection
- Author
- Goyal, Manu and Hassanpour, Saeed
- Subjects
- FOS: Computer and information sciences; Computer Science - Machine Learning; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition; Machine Learning (cs.LG)
- Abstract
Diabetic Foot Ulcers (DFU) that affect the lower extremities are a major complication of diabetes. Each year, more than 1 million diabetic patients undergo amputation because DFU are not recognized and properly treated by clinicians in time. There is an urgent need for computer-aided detection (CAD) systems for DFU. In this paper, we propose using deep learning methods (EfficientDet architectures) for the detection of DFU in the DFUC2020 challenge dataset, which consists of 4,500 DFU images. We further refined the EfficientDet architecture to avoid false negative and false positive predictions. The code for this method is available at https://github.com/Manugoyal12345/Yet-Another-EfficientDet-Pytorch., Comment: 8 pages and DFUC Challenge
- Published
- 2020
9. Self-Supervised Contextual Language Representation of Radiology Reports to Improve the Identification of Communication Urgency
- Author
- Meng, Xing, Ganoe, Craig H., Sieberg, Ryan T., Cheung, Yvonne Y., and Hassanpour, Saeed
- Subjects
- FOS: Computer and information sciences; Computer Science - Machine Learning; Computer Science - Computation and Language; Statistics - Machine Learning; Machine Learning (stat.ML); Articles; Computation and Language (cs.CL); Machine Learning (cs.LG)
- Abstract
Machine learning methods have recently achieved high performance in biomedical text analysis. However, a major bottleneck in the widespread application of these methods is obtaining the required large amounts of annotated training data, which is resource-intensive and time-consuming. Recent progress in self-supervised learning has shown promise in leveraging large text corpora without explicit annotations. In this work, we built a self-supervised contextual language representation model using BERT, a deep bidirectional transformer architecture, to identify radiology reports requiring prompt communication to the referring physicians. We pre-trained the BERT model on a large unlabeled corpus of radiology reports and used the resulting contextual representations in a final text classifier for communication urgency. Our model achieved a precision of 97.0%, recall of 93.3%, and F-measure of 95.1% on an independent test set in identifying radiology reports for prompt communication, and significantly outperformed the previous state-of-the-art model based on word2vec representations., Comment: Accepted in AMIA 2020 Informatics Summit
- Published
- 2020
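A hedged sketch of the downstream classification step described above, using the Hugging Face transformers API. The paper pre-trains BERT on a large unlabeled corpus of radiology reports before classification; here a generic "bert-base-uncased" checkpoint and toy reports stand in for that domain-adapted model and the real data.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "bert-base-uncased" is a stand-in checkpoint, not the authors' domain-adapted model.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

reports = ["Large pneumothorax with mediastinal shift.", "No acute cardiopulmonary process."]
labels = torch.tensor([1, 0])  # 1 = report requires prompt communication (toy labels)

batch = tokenizer(reports, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)  # returns loss and logits
outputs.loss.backward()
```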
10. Multi-Ontology Refined Embeddings (MORE): A Hybrid Multi-Ontology and Corpus-based Semantic Representation for Biomedical Concepts
- Author
- Jiang, Steven, Wu, Weiyi, Tomita, Naofumi, Ganoe, Craig, and Hassanpour, Saeed
- Subjects
- FOS: Computer and information sciences; Computer Science - Computation and Language; Computation and Language (cs.CL)
- Abstract
Objective: Currently, a major limitation for natural language processing (NLP) analyses in clinical applications is that a concept can be referenced in various forms across different texts. This paper introduces Multi-Ontology Refined Embeddings (MORE), a novel hybrid framework for incorporating domain knowledge from multiple ontologies into a distributional semantic model, learned from a corpus of clinical text. Materials and Methods: We use the RadCore and MIMIC-III free-text datasets for the corpus-based component of MORE. For the ontology-based part, we use the Medical Subject Headings (MeSH) ontology and three state-of-the-art ontology-based similarity measures. In our approach, we propose a new learning objective, modified from the Sigmoid cross-entropy objective function. Results and Discussion: We evaluate the quality of the generated word embeddings using two established datasets of semantic similarities among biomedical concept pairs. On the first dataset of 29 concept pairs, whose similarity scores were established by physicians and medical coders, MORE's similarity scores have the highest combined correlation (0.633), which is 5.0% higher than that of the baseline model and 12.4% higher than that of the best ontology-based similarity measure. On the second dataset of 449 concept pairs, MORE's similarity scores have a correlation of 0.481 with the average of four medical residents' similarity ratings, outperforming the skip-gram model by 8.1% and the best ontology measure by 6.9%.
- Published
- 2020
11. Difficulty Translation in Histopathology Images
- Author
- Wei, Jerry, Suriawinata, Arief, Liu, Xiaoying, Ren, Bing, Nasir-Moin, Mustafa, Tomita, Naofumi, Wei, Jason, and Hassanpour, Saeed
- Subjects
- FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION
- Abstract
The unique nature of histopathology images opens the door to domain-specific formulations of image translation models. We propose a difficulty translation model that modifies colorectal histopathology images to be more challenging to classify. Our model comprises a scorer, which provides an output confidence to measure the difficulty of images, and an image translator, which learns to translate images from easy-to-classify to hard-to-classify using a training set defined by the scorer. We present three findings. First, generated images were indeed harder to classify for both human pathologists and machine learning classifiers than their corresponding source images. Second, image classifiers trained with generated images as augmented data performed better on both easy and hard images from an independent test set. Finally, human annotator agreement and our model's measure of difficulty correlated strongly, implying that for future work requiring human annotator agreement, the confidence score of a machine learning classifier could be used as a proxy., Comment: Accepted to 2020 Artificial Intelligence in Medicine (AIME) conference. Invited for long oral presentation
- Published
- 2020
- Full Text
- View/download PDF
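A small sketch of the scorer-defined split described in the abstract above: a trained classifier's confidence partitions images into easy-to-classify and hard-to-classify sets, which would then serve as the source and target domains for the image translator (e.g., a CycleGAN). The threshold and function signature are illustrative assumptions.

```python
import torch

def split_by_difficulty(images, labels, scorer, threshold=0.9):
    """Partition images into easy / hard sets using a trained scorer's confidence.

    Images the scorer classifies correctly with confidence >= threshold form the
    "easy" domain; the rest form the "hard" domain. The 0.9 threshold is an
    illustrative assumption, not the paper's setting.
    """
    scorer.eval()
    with torch.no_grad():
        probs = torch.softmax(scorer(images), dim=1)
        conf, pred = probs.max(dim=1)
    easy_mask = (pred == labels) & (conf >= threshold)
    return images[easy_mask], images[~easy_mask]

# usage with any trained image classifier over patch tensors, e.g.:
# easy_imgs, hard_imgs = split_by_difficulty(patches, patch_labels, resnet_scorer)
```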
12. Sensitivity and Specificity Evaluation of Deep Learning Models for Detection of Pneumoperitoneum on Chest Radiographs
- Author
- Goyal, Manu, Austin-Strohbehn, Judith, Sun, Sean J., Rodriguez, Karen, Sin, Jessica M., Cheung, Yvonne Y., and Hassanpour, Saeed
- Subjects
- FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Computer Science - Computer Vision and Pattern Recognition; FOS: Electrical engineering, electronic engineering, information engineering; Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Background: Deep learning has great potential to assist with detecting and triaging critical findings such as pneumoperitoneum on medical images. To be clinically useful, the performance of this technology still needs to be validated for generalizability across different types of imaging systems. Materials and Methods: This retrospective study included 1,287 chest X-ray images of patients who underwent initial chest radiography at 13 different hospitals between 2011 and 2019. The chest X-ray images were labelled independently by four radiologist experts as positive or negative for pneumoperitoneum. State-of-the-art deep learning models (ResNet101, InceptionV3, DenseNet161, and ResNeXt101) were trained on a subset of this dataset, and the automated classification performance was evaluated on the rest of the dataset by measuring the AUC, sensitivity, and specificity for each model. Furthermore, the generalizability of these deep learning models was assessed by stratifying the test dataset according to the type of the utilized imaging systems. Results: All deep learning models performed well in identifying radiographs with pneumoperitoneum, with DenseNet161 achieving the highest AUC of 95.7%, specificity of 89.9%, and sensitivity of 91.6%. The DenseNet161 model accurately classified radiographs from different imaging systems (accuracy: 90.8%), even though it was trained on images captured by a specific imaging system from a single institution. This result suggests the generalizability of our model for learning salient features in chest X-ray images to detect pneumoperitoneum, independent of the imaging system., Comment: 21 Pages, 4 Tables and 6 Figures
- Published
- 2020
- Full Text
- View/download PDF
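A brief sketch of the per-model evaluation described above: given predicted probabilities for pneumoperitoneum, compute AUC plus sensitivity and specificity at a fixed operating point. The arrays below are toy values, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# toy ground truth and predicted probabilities for "pneumoperitoneum present"
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.92, 0.10, 0.78, 0.66, 0.35, 0.08, 0.49, 0.21])

auc = roc_auc_score(y_true, y_prob)
y_pred = (y_prob >= 0.5).astype(int)                       # fixed operating threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                               # recall for the positive class
specificity = tn / (tn + fp)
print(f"AUC={auc:.3f}  sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")
```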
13. Predicting colorectal polyp recurrence using time-to-event analysis of medical records
- Author
- Harrington, Lia X., Wei, Jason W., Suriawinata, Arief A., Mackenzie, Todd A., and Hassanpour, Saeed
- Subjects
- FOS: Computer and information sciences; Computer Science - Machine Learning; surgical procedures, operative; Statistics - Machine Learning; otorhinolaryngologic diseases; Applications (stat.AP); Machine Learning (stat.ML); pathological conditions, signs and symptoms; Statistics - Applications; neoplasms; digestive system diseases; Machine Learning (cs.LG)
- Abstract
Identifying patient characteristics that influence the rate of colorectal polyp recurrence can provide important insights into which patients are at higher risk for recurrence. We used natural language processing to extract polyp morphological characteristics from 953 polyp-presenting patients' electronic medical records. We used subsequent colonoscopy reports to examine how the time to polyp recurrence (731 patients experienced recurrence) is influenced by these characteristics as well as anthropometric features using Kaplan-Meier curves, Cox proportional hazards modeling, and random survival forest models. We found that the rate of recurrence differed significantly by polyp size, number, and location and patient smoking status. Additionally, right-sided colon polyps increased recurrence risk by 30% compared to left-sided polyps. History of tobacco use increased polyp recurrence risk by 20% compared to never-users. A random survival forest model showed an AUC of 0.65 and identified several other predictive variables, which can inform development of personalized polyp surveillance plans., Comment: Accepted in AMIA 2020 Informatics Summit
- Published
- 2019
- Full Text
- View/download PDF
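A hedged sketch of the time-to-event analysis described above using the lifelines library: a Kaplan-Meier estimate of time to recurrence and a Cox proportional hazards fit over a few candidate predictors. The column names and rows are illustrative toy data, not the study's records.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# toy records; the real study extracted these covariates with NLP from medical records
df = pd.DataFrame({
    "months_to_recurrence": [14, 36, 22, 60, 9, 48],
    "recurred":             [1, 1, 0, 0, 1, 1],   # 0 = censored, no recurrence observed
    "polyp_size_mm":        [12, 4, 8, 3, 15, 5],
    "right_sided":          [1, 0, 1, 0, 1, 0],
    "ever_smoker":          [1, 0, 1, 0, 0, 1],
})

kmf = KaplanMeierFitter()
kmf.fit(df["months_to_recurrence"], event_observed=df["recurred"])

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_recurrence", event_col="recurred")
cph.print_summary()  # hazard ratios for size, location, and smoking status
```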
14. Additional file 1 of Automated detection of nonmelanoma skin cancer using digital images: a systematic review
- Author
- Marka, Arthur, Carter, Joi, Toto, Ermal, and Hassanpour, Saeed
- Abstract
PRISMA checklist. (DOC 66 kb)
- Published
- 2019
- Full Text
- View/download PDF
15. Deep neural networks for automated classification of colorectal polyps on histopathology slides: A multi-institutional evaluation
- Author
- Wei, Jason W., Suriawinata, Arief A., Vaickus, Louis J., Ren, Bing, Liu, Xiaoying, Lisovsky, Mikhail, Tomita, Naofumi, Abdollahi, Behnaz, Kim, Adam S., Snover, Dale C., Baron, John A., Barry, Elizabeth L., and Hassanpour, Saeed
- Subjects
- FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Computer Science - Computer Vision and Pattern Recognition; FOS: Electrical engineering, electronic engineering, information engineering; Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Histological classification of colorectal polyps plays a critical role in both screening for colorectal cancer and care of affected patients. An accurate and automated algorithm for the classification of colorectal polyps on digitized histopathology slides could benefit clinicians and patients. Here, we evaluate the performance and assess the generalizability of a deep neural network for colorectal polyp classification on histopathology slide images using a multi-institutional dataset. In this study, we developed a deep neural network for classification of four major colorectal polyp types, tubular adenoma, tubulovillous/villous adenoma, hyperplastic polyp, and sessile serrated adenoma, based on digitized histopathology slides from our institution, Dartmouth-Hitchcock Medical Center (DHMC), in New Hampshire. We evaluated the deep neural network on an internal dataset of 157 histopathology slide images from DHMC, as well as on an external dataset of 238 histopathology slide images from 24 different institutions spanning 13 states in the United States. We measured accuracy, sensitivity, and specificity of our model in this evaluation and compared its performance to local pathologists' diagnoses at the point-of-care retrieved from corresponding pathology laboratories. For the internal evaluation, the deep neural network had a mean accuracy of 93.5% (95% CI 89.6%-97.4%), compared with local pathologists' accuracy of 91.4% (95% CI 87.0%-95.8%). On the external test set, the deep neural network achieved an accuracy of 87.0% (95% CI 82.7%-91.3%), comparable with local pathologists' accuracy of 86.6% (95% CI 82.3%-90.9%). If confirmed in clinical settings, our model could assist pathologists by improving the diagnostic efficiency, reproducibility, and accuracy of colorectal cancer screenings.
- Published
- 2019
- Full Text
- View/download PDF
16. Generative Image Translation for Data Augmentation in Colorectal Histopathology Images
- Author
- Wei, Jerry, Suriawinata, Arief, Vaickus, Louis, Ren, Bing, Liu, Xiaoying, Wei, Jason, and Hassanpour, Saeed
- Subjects
- FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Computer Science - Computer Vision and Pattern Recognition; FOS: Electrical engineering, electronic engineering, information engineering; Electrical Engineering and Systems Science - Image and Video Processing; Article
- Abstract
We present an image translation approach to generate augmented data for mitigating data imbalances in a dataset of histopathology images of colorectal polyps, adenomatous tumors that can lead to colorectal cancer if left untreated. By applying cycle-consistent generative adversarial networks (CycleGANs) to a source domain of normal colonic mucosa images, we generate synthetic colorectal polyp images that belong to diagnostically less common polyp classes. Generated images maintain the general structure of their source image but exhibit adenomatous features that can be enhanced with our proposed filtration module, called Path-Rank-Filter. We evaluate the quality of generated images through Turing tests with four gastrointestinal pathologists, finding that at least two of the four pathologists could not identify generated images at a statistically significant level. Finally, we demonstrate that using CycleGAN-generated images to augment training data improves the AUC of a convolutional neural network for detecting sessile serrated adenomas by over 10%, suggesting that our approach might warrant further research for other histopathology image classification tasks., Comment: NeurIPS 2019 Machine Learning for Health Workshop Full Paper (19/111 accepted papers = 17% acceptance rate)
- Published
- 2019
- Full Text
- View/download PDF
17. Deep Learning Methods and Applications for Region of Interest Detection in Dermoscopic Images
- Author
- Goyal, Manu, Yap, Moi Hoon, and Hassanpour, Saeed
- Subjects
- FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; Computer Science - Computer Vision and Pattern Recognition
- Abstract
Rapid growth in the development of medical imaging analysis technology has been propelled by the great interest in improving computer-aided diagnosis and detection (CAD) systems for three popular image visualization tasks: classification, segmentation, and Region of Interest (ROI) detection. However, a limited number of datasets with ground truth annotations are available for developing segmentation and ROI detection of lesions, as expert annotations are laborious and expensive. Detecting the ROI is vital to locate lesions accurately. In this paper, we propose the use of two deep object detection meta-architectures (Faster R-CNN Inception-V2 and SSD Inception-V2) to develop robust ROI detection of skin lesions in dermoscopic datasets (2017 ISIC Challenge, PH2, and HAM10000), and compared the performance with a state-of-the-art segmentation algorithm (DeepLabV3+). To further demonstrate the potential of our work, we built a smartphone application for real-time automated detection of skin lesions based on this methodology. In addition, we developed an automated natural data-augmentation method from ROI detection to produce augmented copies of dermoscopic images, as a pre-processing step in the segmentation of skin lesions to further improve the performance of the current state-of-the-art deep learning algorithm. Our proposed ROI detection has the potential to more appropriately streamline dermatology referrals and reduce unnecessary biopsies in the diagnosis of skin cancer., Comment: Natural Augmentation
- Published
- 2018
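An illustrative ROI-detection sketch for the entry above. The paper uses Faster R-CNN Inception-V2 and SSD Inception-V2 from the TensorFlow Object Detection API; torchvision's Faster R-CNN is used here only as a stand-in for the same single-lesion detection setting, and it would need fine-tuning on dermoscopic lesion boxes before its ROIs are meaningful.

```python
import torch
import torchvision

# COCO-pretrained weights are only a starting point; fine-tune on lesion boxes in practice.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 512, 512)          # stand-in for a dermoscopic image tensor in [0, 1]
with torch.no_grad():
    pred = model([image])[0]             # dict with "boxes", "labels", "scores"

keep = pred["scores"] > 0.5
rois = pred["boxes"][keep]               # candidate lesion ROIs as (x1, y1, x2, y2)
```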
18. Deep-Learning for Classification of Colorectal Polyps on Whole-Slide Images
- Author
- Korbar, Bruno, Olofson, Andrea M., Miraflor, Allen P., Nicka, Katherine M., Suriawinata, Matthew A., Torresani, Lorenzo, Suriawinata, Arief A., and Hassanpour, Saeed
- Subjects
- FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition; digestive system diseases
- Abstract
Histopathological characterization of colorectal polyps is an important principle for determining the risk of colorectal cancer and future rates of surveillance for patients. This characterization is time-intensive, requires years of specialized training, and suffers from significant inter-observer and intra-observer variability. In this work, we built an automatic image-understanding method that can accurately classify different types of colorectal polyps in whole-slide histology images to help pathologists with histopathological characterization and diagnosis of colorectal polyps. The proposed image-understanding method is based on deep-learning techniques, which rely on numerous levels of abstraction for data representation and have shown state-of-the-art results for various image analysis tasks. Our image-understanding method covers all five polyp types (hyperplastic polyp, sessile serrated polyp, traditional serrated adenoma, tubular adenoma, and tubulovillous/villous adenoma) that are included in the US multi-society task force guidelines for colorectal cancer risk assessment and surveillance, and encompasses the most common occurrences of colorectal polyps. Our evaluation on 239 independent test samples shows our proposed method can identify the types of colorectal polyps in whole-slide images with a high efficacy (accuracy: 93.0%, precision: 89.7%, recall: 88.3%, F1 score: 88.8%). The presented method in this paper can reduce the cognitive burden on pathologists and improve their accuracy and efficiency in histopathological characterization of colorectal polyps, and in subsequent risk assessment and follow-up recommendations.
- Published
- 2017
- Full Text
- View/download PDF