317 results on "Aurélio Campilho"
Search Results
52. Central Medialness Adaptive Strategy for 3D Lung Nodule Segmentation in Thoracic CT Images.
- Author
-
Luis Gonçalves, Jorge Novo, and Aurélio Campilho
- Published
- 2016
- Full Text
- View/download PDF
53. Motion Descriptor for Human Gesture Recognition in Low Resolution Images.
- Author
-
António Ferreira, Guilherme Silva 0001, André Dias, Alfredo Martins, and Aurélio Campilho
- Published
- 2015
- Full Text
- View/download PDF
54. Retinal and choroidal vasoreactivity in central serous chorioretinopathy
- Author
-
Susana Penas, Teresa Araújo, Ana Maria Mendonça, Simão Faria, Jorge Silva, Aurélio Campilho, Maria Lurdes Martins, Vânia Sousa, Amândio Rocha-Sousa, Ângela Carneiro, and Fernando Falcão-Reis
- Subjects
Cellular and Molecular Neuroscience ,Ophthalmology ,Central Serous Chorioretinopathy ,Choroid ,Visual Acuity ,Humans ,Pilot Projects ,Fluorescein Angiography ,Tomography, Optical Coherence ,Sensory Systems ,Retrospective Studies - Abstract
This study aims to investigate retinal and choroidal vascular reactivity to carbogen in central serous chorioretinopathy (CSC) patients. An experimental pilot study including 68 eyes from 20 CSC patients and 14 age and sex-matched controls was performed. The participants inhaled carbogen (5% CO2). No significant differences were detected in baseline hemodynamic parameters between both groups. A significant positive correlation was found between the participants' age and arterial diameter variation (p < 0.001, r = 0.447), meaning that younger participants presented a more vasoconstrictive response (negative variation) than older ones. No significant differences were detected in the vasoreactive response between CSC and controls for both arterial and venous vessels (p = 0.63 and p = 0.85, respectively). Although the vascular reactivity was not related to the activity of CSC, it was related to the time of disease, for both the arterial (p = 0.02, r = 0.381) and venous (p = 0.001, r = 0.530) beds. SFCT and MCCT were highly correlated (r = 0.830, p < 0.001). Both SFCT and MCCT significantly increased in CSC patients (p < 0.001 and p < 0.001) but not in controls (p = 0.059 and 0.247). A significant negative correlation between CSC patients' age and MCCT variation (r = -0.340, p = 0.049) was detected. In CSC patients, the choroidal thickness variation was not related to the activity state, time of disease, or previous photodynamic treatment. Vasoreactivity to carbogen was similar in the retinal vessels but significantly higher in the choroidal vessels of CSC patients when compared to controls, strengthening the hypothesis of a choroidal regulation dysfunction in this pathology.
- Published
- 2022
55. DeepFixCX: Explainable privacy-preserving image compression for medical image analysis
- Author
-
Alex Gaudio, Asim Smailagic, Christos Faloutsos, Shreshta Mohan, Elvin Johnson, Yuhao Liu, Pedro Costa, and Aurélio Campilho
- Subjects
General Computer Science - Published
- 2023
56. Retinal layer and fluid segmentation in optical coherence tomography images using a hierarchical framework
- Author
-
Tânia Melo, Ângela Carneiro, Aurélio Campilho, and Ana Maria Mendonça
- Subjects
Radiology, Nuclear Medicine and imaging - Published
- 2023
57. ExplainFix: Explainable Spatially Fixed Deep Networks
- Author
-
Alex Gaudio, Christos Faloutsos, Asim Smailagic, Pedro Costa, and Aurélio Campilho
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Artificial Intelligence (cs.AI) ,General Computer Science ,Computer Science - Artificial Intelligence ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,Machine Learning (cs.LG) - Abstract
Is there an initialization for deep networks that requires no learning? ExplainFix adopts two design principles: the "fixed filters" principle that all spatial filter weights of convolutional neural networks can be fixed at initialization and never learned, and the "nimbleness" principle that only few network parameters suffice. We contribute (a) visual model-based explanations, (b) speed and accuracy gains, and (c) novel tools for deep convolutional neural networks. ExplainFix gives key insights that spatially fixed networks should have a steered initialization, that spatial convolution layers tend to prioritize low frequencies, and that most network parameters are not necessary in spatially fixed models. ExplainFix models have up to 100x fewer spatial filter kernels than fully learned models and matching or improved accuracy. Our extensive empirical analysis confirms that ExplainFix guarantees nimbler models (train up to 17% faster with channel pruning), matching or improved predictive performance (spanning 13 distinct baseline models, four architectures and two medical image datasets), improved robustness to larger learning rate, and robustness to varying model size. We are first to demonstrate that all spatial filters in state-of-the-art convolutional deep networks can be fixed at initialization, not learned., Comment: Recently Published in Wiley WIREs Journal of Data Mining and Knowledge Discovery. This version has minor formatting differences and includes the supplementary appendix with the main document. Source code: https://github.com/adgaudio/ExplainFix/
- Published
- 2023
- Full Text
- View/download PDF
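The "fixed filters" principle summarized in entry 57 above can be illustrated with a short PyTorch sketch: every spatial convolution kernel is frozen at its initial values, so only pointwise (1x1) convolutions, normalization layers and the classifier head remain trainable. The DenseNet backbone and the plain (non-steered) initialization are assumptions for illustration, not the exact ExplainFix setup.

# Sketch of the "fixed filters" idea: freeze all spatial conv kernels at initialization.
import torch.nn as nn
from torchvision.models import densenet121

model = densenet121(num_classes=2)
for m in model.modules():
    if isinstance(m, nn.Conv2d) and m.kernel_size != (1, 1):
        m.weight.requires_grad_(False)   # spatial kernels stay at their initial values

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")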
58. Machine learning for medical applications.
- Author
-
Verónica Bolón-Canedo, Beatriz Remeseiro, Amparo Alonso-Betanzos, and Aurélio Campilho
- Published
- 2016
59. Feature definition, analysis and selection for lung nodule classification in chest computerized tomography images.
- Author
-
Luis Gonçalves, Jorge Novo, and Aurélio Campilho
- Published
- 2016
60. Lightweight multi-scale classification of chest radiographs via size-specific batch normalization
- Author
-
Sofia C. Pereira, Joana Rocha, Aurélio Campilho, Pedro Sousa, and Ana Maria Mendonça
- Subjects
Health Informatics ,Software ,Computer Science Applications - Published
- 2023
61. Detection of juxta-pleural lung nodules in computed tomography images.
- Author
-
Guilherme Aresta, António Cunha, and Aurélio Campilho
- Published
- 2017
- Full Text
- View/download PDF
62. Automatic and semi-automatic approaches for arteriolar-to-venular computation in retinal photographs.
- Author
-
Ana Maria Mendonça, Beatriz Remeseiro, Behdad Dashtbozorg, and Aurélio Campilho
- Published
- 2017
- Full Text
- View/download PDF
63. Estimation of retinal vessel caliber using model fitting and random forests.
- Author
-
Teresa Araujo, Ana Maria Mendonça, and Aurélio Campilho
- Published
- 2017
- Full Text
- View/download PDF
64. Artificial Intelligence Improves the Accuracy in Histologic Classification of Breast Lesions
- Author
-
Rita Canas-Marques, António Polónia, Daniel Pinto, Paulo Aguiar, Sofia Campelos, Catarina Eloy, Magdalena Biskup-Frużyńska, Ierece Aymore, Guilherme Aresta, Ricardo Santana Veiga, Aurélio Campilho, Ana C.F. Ribeiro, Teresa Araújo, and Scotty Kwok
- Subjects
0301 basic medicine ,Breast tissue ,Invasive carcinoma ,business.industry ,Carcinoma in situ ,Breast Neoplasms ,General Medicine ,medicine.disease ,03 medical and health sciences ,030104 developmental biology ,0302 clinical medicine ,Artificial Intelligence ,030220 oncology & carcinogenesis ,Image Interpretation, Computer-Assisted ,medicine ,Humans ,Female ,Diagnosis, Computer-Assisted ,Artificial intelligence ,business - Abstract
Objectives This study evaluated the usefulness of artificial intelligence (AI) algorithms as tools in improving the accuracy of histologic classification of breast tissue. Methods Overall, 100 microscopic photographs (test A) and 152 regions of interest in whole-slide images (test B) of breast tissue were classified into 4 classes: normal, benign, carcinoma in situ (CIS), and invasive carcinoma. The accuracy of 4 pathologists and 3 pathology residents was evaluated without and with the assistance of algorithms. Results In test A, algorithm A had accuracy of 0.87, with the lowest accuracy in the benign class (0.72). The observers had average accuracy of 0.80, and most clinically relevant discordances occurred in distinguishing benign from CIS (7.1% of classifications). With the assistance of algorithm A, the observers significantly increased their average accuracy to 0.88. In test B, algorithm B had accuracy of 0.49, with the lowest accuracy in the CIS class (0.06). The observers had average accuracy of 0.86, and most clinically relevant discordances occurred in distinguishing benign from CIS (6.3% of classifications). With the assistance of algorithm B, the observers maintained their average accuracy. Conclusions AI tools can increase the classification accuracy of pathologists in the setting of breast lesions.
- Published
- 2020
65. Automatic Lung Nodule Detection Combined With Gaze Information Improves Radiologists’ Screening Performance
- Author
-
João Pedrosa, Carlos A. Ferreira, Joao Rebelo, Antonio José Ledo Alves da Cunha, Teresa Araújo, Filipe Alves, Margarida Morgado, Aurélio Campilho, Guilherme Aresta, Isabel Ramos, and Eduardo Negrão
- Subjects
Lung Neoplasms ,Computer science ,Fixation, Ocular ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Deep Learning ,0302 clinical medicine ,Text mining ,Health Information Management ,Radiologists ,medicine ,Humans ,Sensitivity (control systems) ,Electrical and Electronic Engineering ,Eye-Tracking Technology ,Lung cancer ,business.industry ,Pattern recognition ,medicine.disease ,Gaze ,Computer Science Applications ,Informatics ,Fixation (visual) ,Task analysis ,Radiographic Image Interpretation, Computer-Assisted ,Eye tracking ,Artificial intelligence ,Tomography, X-Ray Computed ,business ,030217 neurology & neurosurgery ,Biotechnology - Abstract
Early diagnosis of lung cancer via computed tomography can significantly reduce the morbidity and mortality rates associated with the pathology. However, searching lung nodules is a high complexity task, which affects the success of screening programs. Whilst computer-aided detection systems can be used as second observers, they may bias radiologists and introduce significant time overheads. With this in mind, this study assesses the potential of using gaze information for integrating automatic detection systems in the clinical practice. For that purpose, 4 radiologists were asked to annotate 20 scans from a public dataset while being monitored by an eye tracker device, and an automatic lung nodule detection system was developed. Our results show that radiologists follow a similar search routine and tend to have lower fixation periods in regions where finding errors occur. The overall detection sensitivity of the specialists was 0.67 ± 0.07, whereas the system achieved 0.69. Combining the annotations of one radiologist with the automatic system significantly improves the detection performance to similar levels of two annotators. Filtering automatic detection candidates only for low fixation regions still significantly improves the detection sensitivity without increasing the number of false-positives.
- Published
- 2020
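Entry 65 above reports that filtering automatic detection candidates to low-fixation regions improves sensitivity without adding false positives. The sketch below only illustrates such a filter on a per-pixel fixation-time map; the threshold and the map values are hypothetical, not the study's pipeline.

# Keep only CAD candidates falling in regions the radiologist fixated briefly.
import numpy as np

def filter_candidates(candidates, fixation_map, max_fixation=0.5):
    """candidates: list of (row, col); fixation_map: accumulated seconds per pixel."""
    return [(r, c) for (r, c) in candidates if fixation_map[r, c] < max_fixation]

fix_map = np.zeros((512, 512))
fix_map[100:150, 100:150] = 2.0                               # well-inspected area
kept = filter_candidates([(120, 120), (300, 300)], fix_map)   # -> [(300, 300)]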
66. Data Augmentation for Improving Proliferative Diabetic Retinopathy Detection in Eye Fundus Images
- Author
-
Ana Maria Mendonça, Guilherme Aresta, Aurélio Campilho, Luís Mendonça, Susana Penas, Carolina Maia, Teresa Araújo, and Angela Carneiro
- Subjects
Data augmentation ,020205 medical informatics ,General Computer Science ,Computer science ,02 engineering and technology ,Fundus (eye) ,030218 nuclear medicine & medical imaging ,Neovascularization ,03 medical and health sciences ,chemistry.chemical_compound ,0302 clinical medicine ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,General Materials Science ,Retina ,business.industry ,General Engineering ,deep learning ,Retinal ,Pattern recognition ,Image segmentation ,Diabetic retinopathy ,medicine.disease ,eye diseases ,medicine.anatomical_structure ,chemistry ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Artificial intelligence ,neovascularization ,medicine.symptom ,business ,lcsh:TK1-9971 ,proliferative diabetic retinopathy ,Retinopathy - Abstract
Proliferative diabetic retinopathy (PDR) is an advanced diabetic retinopathy stage, characterized by neovascularization, which leads to ocular complications and severe vision loss. However, the available DR-labeled retinal image datasets have a small representation of images of the severest DR grades, and thus there is lack of PDR cases for training DR grading models. Additionally, the criteria for labelling these images in the publicly available datasets is not always clear, with some images which do not show typical PDR lesions being labeled as PDR due to the presence of photo-coagulation treatment and laser marks. This problem, together with the datasets' high class imbalance, leads to a limited variability of the samples, which the typical data augmentation and class balancing cannot fully mitigate. We propose a heuristic-based data augmentation scheme based on the synthesis of neovessel (NV)-like structures that compensates for the lack of PDR cases in DR-labeled datasets. The proposed neovessel generation algorithm relies on the general knowledge of common location and shape of these structures. NVs are generated and introduced in pre-existent retinal images which can then be used for enlarging deep neural networks' training sets. The data augmentation scheme was tested on multiple datasets, and allows to improve the model's capacity to detect NVs.
- Published
- 2020
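Entry 66 above augments DR datasets by synthesizing neovessel (NV)-like structures in pre-existing retinal images. The toy sketch below only illustrates the general idea, drawing a random tortuous, vessel-like dark curve at a chosen location of an RGB fundus image; the authors' heuristic based on typical NV location and shape is not reproduced, and all parameters are illustrative.

# Illustrative only: insert a synthetic curvilinear, vessel-like structure into an H x W x 3 uint8 image.
import numpy as np
from scipy.ndimage import gaussian_filter

def add_synthetic_vessel(img, center, n_points=200, length=40.0, seed=0):
    rng = np.random.default_rng(seed)
    out = img.astype(np.float32).copy()
    mask = np.zeros(img.shape[:2], dtype=np.float32)
    r, c = float(center[0]), float(center[1])
    angle = rng.uniform(0, 2 * np.pi)
    step = length / n_points
    for _ in range(n_points):
        angle += rng.normal(0, 0.3)            # random tortuosity
        r += step * np.sin(angle)
        c += step * np.cos(angle)
        rr, cc = int(round(r)), int(round(c))
        if 0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]:
            mask[rr, cc] = 1.0
    mask = gaussian_filter(mask, sigma=1.0)    # give the curve some width
    mask = np.clip(mask / (mask.max() + 1e-8), 0, 1)
    for ch, strength in zip(range(out.shape[2]), (60.0, 40.0, 20.0)):
        out[..., ch] -= strength * mask        # darken along the curve, vessel-like tone
    return np.clip(out, 0, 255).astype(img.dtype)

# usage: augmented = add_synthetic_vessel(fundus_rgb, center=(256, 256))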
67. Automatic Label Detection in Chest Radiography Images
- Author
-
Aurélio Campilho, Ana Mendonça, Carlos Ferreira, Guilherme Aresta, and João Pedrosa
- Published
- 2022
68. Segmentation of COVID-19 Lesions in CT Images
- Author
-
Sofia I.A. Pereira, Joana Rocha, Aurélio Campilho, and Ana Maria Mendonça
- Subjects
business.industry ,Process (engineering) ,Computer science ,Second opinion ,Pattern recognition ,Image segmentation ,Lesion ,Data set ,medicine ,Segmentation ,Artificial intelligence ,medicine.symptom ,business ,Encoder ,Reliability (statistics) - Abstract
The worldwide pandemic caused by the new coronavirus (COVID-19) has encouraged the development of multiple computer-aided diagnosis systems to automate daily clinical tasks, such as abnormality detection and classification. Among these tasks, the segmentation of COVID lesions is of high interest to the scientific community, enabling further lesion characterization. Automating the segmentation process can be a useful strategy to provide a fast and accurate second opinion to the physicians, and thus increase the reliability of the diagnosis and disease stratification. The current work explores a CNN-based approach to segment multiple COVID lesions. It includes the implementation of a U-Net structure with a ResNet34 encoder able to deal with the highly imbalanced nature of the problem, as well as the great variability of the COVID lesions, namely in terms of size, shape, and quantity. This approach yields a Dice score of 64.1%, when evaluated on the publicly available COVID-19-20 Lung CT Lesion Segmentation GrandChallenge data set.
- Published
- 2021
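The 64.1% Dice score reported in entry 68 above is the standard overlap measure between a predicted and a reference lesion mask; a minimal NumPy implementation is given below for reference.

import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice = 2|P n T| / (|P| + |T|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# usage: dice_score(predicted_mask, ground_truth_mask)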
69. iW-Net: an automatic and minimalistic interactive lung nodule segmentation deep network
- Author
-
Isabel Ramos, Antonio José Ledo Alves da Cunha, Teresa Araújo, Aurélio Campilho, Bram van Ginneken, Guilherme Aresta, and Colin Jacobs
- Subjects
Lung Diseases ,0301 basic medicine ,FOS: Computer and information sciences ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,Boundary (topology) ,lcsh:Medicine ,Article ,Automation ,03 medical and health sciences ,All institutes and research themes of the Radboud University Medical Center ,0302 clinical medicine ,medicine ,Humans ,Segmentation ,lcsh:Science ,Computed tomography ,Early Detection of Cancer ,Multidisciplinary ,business.industry ,Intersection (set theory) ,Deep learning ,lcsh:R ,Pattern recognition ,Nodule (medicine) ,Function (mathematics) ,Net (mathematics) ,Computer science ,030104 developmental biology ,ComputingMethodologies_PATTERNRECOGNITION ,Feature (computer vision) ,lcsh:Q ,Neural Networks, Computer ,Artificial intelligence ,medicine.symptom ,business ,Biomedical engineering ,Algorithms ,030217 neurology & neurosurgery ,Rare cancers Radboud Institute for Health Sciences [Radboudumc 9] - Abstract
We propose iW-Net, a deep learning model that allows for both automatic and interactive segmentation of lung nodules in computed tomography images. iW-Net is composed of two blocks: the first one provides an automatic segmentation and the second one allows to correct it by analyzing 2 points introduced by the user in the nodule's boundary. For this purpose, a physics inspired weight map that takes the user input into account is proposed, which is used both as a feature map and in the system's loss function. Our approach is extensively evaluated on the public LIDC-IDRI dataset, where we achieve a state-of-the-art performance of 0.55 intersection over union vs the 0.59 inter-observer agreement. Also, we show that iW-Net allows to correct the segmentation of small nodules, essential for proper patient referral decision, as well as improve the segmentation of the challenging non-solid nodules and thus may be an important tool for increasing the early diagnosis of lung cancer., Comment: Pre-print submitted to IEEE Transactions on Biomedical Engineering
- Published
- 2019
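Entry 69 above mentions a physics-inspired weight map built from the two user-clicked boundary points, used both as an input feature and inside the loss. The sketch below is only a toy stand-in (Gaussian bumps at the two points feeding a weighted pixel-wise cross-entropy), not the iW-Net formulation; all values are assumptions.

import numpy as np

def point_weight_map(shape, points, sigma=5.0, base=1.0, peak=10.0):
    """Weight map with a Gaussian bump at each user-provided (row, col) point."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    w = np.full(shape, base, dtype=np.float32)
    for (pr, pc) in points:
        d2 = (rows - pr) ** 2 + (cols - pc) ** 2
        w += (peak - base) * np.exp(-d2 / (2 * sigma ** 2))
    return w

def weighted_bce(prob, target, weights, eps=1e-7):
    """Per-pixel binary cross-entropy weighted by the map above."""
    prob = np.clip(prob, eps, 1 - eps)
    bce = -(target * np.log(prob) + (1 - target) * np.log(1 - prob))
    return float((weights * bce).sum() / weights.sum())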
70. BACH: Grand challenge on breast cancer histology images
- Author
-
Minh Nguyen Nhat To, Eal Kim, Christoph Walz, Ismael Kone, Lingling Sun, Yaqi Wang, Kaiqiang Ma, Veronica Sanchez-Freire, Florian Ludwig, Sameh Galal, Paulo Aguiar, António Polónia, Teresa Araújo, Quoc Dang Vu, Lahsen Boulmane, Gerardo Fernandez, Jin Tae Kwak, Michael J. Donovan, Jiannan Fang, Jack Zeineh, Nadia Brancati, Scotty Kwok, Catarina Eloy, Stefan Braunewell, Mohammed Safwan, Maximilian Baust, Monica Chan, Aurélio Campilho, Guilherme Aresta, Marcel Prastawa, Varghese Alex, Daniel Riccio, Sai Saketh Chennamsetty, Bahram Marami, Matthias Kohl, and Maria Frucci
- Subjects
FOS: Computer and information sciences ,medicine.medical_specialty ,Histology ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,Breast Neoplasms ,Health Informatics ,Pattern Recognition, Automated ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,breast cancer ,0302 clinical medicine ,Breast cancer ,Digital pathology ,Humans ,Medicine ,Radiology, Nuclear Medicine and imaging ,Relevance (information retrieval) ,Medical physics ,Challenge ,Microscopy ,Invasive carcinoma ,Staining and Labeling ,Radiological and Ultrasound Technology ,business.industry ,Deep learning ,medicine.disease ,Computer Graphics and Computer-Aided Design ,Statistical classification ,Positive response ,classification ,Female ,Comparative study ,Neural Networks, Computer ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Algorithms ,030217 neurology & neurosurgery - Abstract
Breast cancer is the most common invasive cancer in women, affecting more than 10% of women worldwide. Microscopic analysis of a biopsy remains one of the most important methods to diagnose the type of breast cancer. This requires specialized analysis by pathologists, in a task that i) is highly time- and cost-consuming and ii) often leads to nonconsensual results. The relevance and potential of automatic classification algorithms using hematoxylin-eosin stained histopathological images has already been demonstrated, but the reported results are still sub-optimal for clinical use. With the goal of advancing the state-of-the-art in automatic classification, the Grand Challenge on BreAst Cancer Histology images (BACH) was organized in conjunction with the 15th International Conference on Image Analysis and Recognition (ICIAR 2018). A large annotated dataset, composed of both microscopy and whole-slide images, was specifically compiled and made publicly available for the BACH challenge. Following a positive response from the scientific community, a total of 64 submissions, out of 677 registrations, effectively entered the competition. From the submitted algorithms it was possible to push forward the state-of-the-art in terms of accuracy (87%) in automatic classification of breast cancer with histopathological images. Convolutional neuronal networks were the most successful methodology in the BACH challenge. Detailed analysis of the collective results allowed the identification of remaining challenges in the field and recommendations for future developments. The BACH dataset remains publicly available so as to promote further improvements to the field of automatic classification in digital pathology., Accepted for publication at Medical Image Analysis (Elsevier). Publication licensed under the Creative Commons CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
- Published
- 2019
71. A Study on Annotation Efficient Learning Methods for Segmentation in Prostate Histopathological Images
- Author
-
Jaime Cardoso, Pedro Costa, and Aurélio Campilho
- Published
- 2021
72. LNDb challenge on automatic lung cancer patient management
- Author
-
João Pedrosa, Krishna Chaitanya Kaluva, Yizhuan Jia, Rongzhen Chen, Guilherme Aresta, Gurraj Atwal, Suthirth Vaidya, Zhongwei Sun, Zhenhuan Tian, Jiaoliang Li, Liansheng Wang, Antonio José Ledo Alves da Cunha, Xiaoyu Chen, Isabel Ramos, Xuejun Men, Sambit Tarai, Ildoo Kim, Carlos Abreu Ferreira, Abhijith Chunduru, Kiran Vaidhya, Adrian Galdran, Aurélio Campilho, Hady Ahmady Phoulady, Sai Prasad Pranav Nadimpalli, Alexandr G. Rassadin, and Hamid Bouchachia
- Subjects
medicine.medical_specialty ,Jaccard index ,Lung Neoplasms ,Databases, Factual ,Health Informatics ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,medicine ,Humans ,Radiology, Nuclear Medicine and imaging ,Segmentation ,Lung cancer ,Survival rate ,Retrospective Studies ,Data collection ,Radiological and Ultrasound Technology ,business.industry ,Cancer ,Solitary Pulmonary Nodule ,Nodule (medicine) ,medicine.disease ,Computer Graphics and Computer-Aided Design ,Computer Vision and Pattern Recognition ,Radiology ,medicine.symptom ,business ,Tomography, X-Ray Computed ,030217 neurology & neurosurgery ,Kappa ,Algorithms - Abstract
Lung cancer is the deadliest type of cancer worldwide and late detection is the major factor for the low survival rate of patients. Low dose computed tomography has been suggested as a potential screening tool but manual screening is costly and time-consuming. This has fuelled the development of automatic methods for the detection, segmentation and characterisation of pulmonary nodules. In spite of promising results, the application of automatic methods to clinical routine is not straightforward and only a limited number of studies have addressed the problem in a holistic way. With the goal of advancing the state of the art, the Lung Nodule Database (LNDb) Challenge on automatic lung cancer patient management was organized. The LNDb Challenge addressed lung nodule detection, segmentation and characterization as well as prediction of patient follow-up according to the 2017 Fleischner Society pulmonary nodule guidelines. 294 CT scans were thus collected retrospectively at the Centro Hospitalar e Universitário de São João in Porto, Portugal and each CT was annotated by at least one radiologist. Annotations comprised nodule centroids, segmentations and subjective characterization. 58 CTs and the corresponding annotations were withheld as a separate test set. A total of 947 users registered for the challenge and 11 successful submissions for at least one of the sub-challenges were received. For patient follow-up prediction, a maximum quadratic weighted Cohen's kappa of 0.580 was obtained. In terms of nodule detection, a sensitivity below 0.4 (and 0.7) at 1 false positive per scan was obtained for nodules identified by at least one (and two) radiologist(s). For nodule segmentation, a maximum Jaccard score of 0.567 was obtained, surpassing the interobserver variability. In terms of nodule texture characterization, a maximum quadratic weighted Cohen's kappa of 0.733 was obtained, with part solid nodules being particularly challenging to classify correctly. Detailed analysis of the proposed methods and the differences in performance allow to identify the major challenges remaining and future directions - data collection, augmentation/generation and evaluation of under-represented classes, the incorporation of scan-level information for better decision-making and the development of tools and challenges with clinical-oriented goals. The LNDb Challenge and associated data remain publicly available so that future methods can be tested and benchmarked, promoting the development of new algorithms in lung cancer medical image analysis and patient follow-up recommendation.
- Published
- 2020
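Follow-up prediction and texture characterization in the LNDb challenge of entry 72 above were scored with the quadratic weighted Cohen's kappa, which scikit-learn provides directly; the labels below are hypothetical.

from sklearn.metrics import cohen_kappa_score

y_true = [0, 1, 2, 3, 1, 0, 2]   # hypothetical follow-up classes from the reference reader
y_pred = [0, 1, 1, 3, 2, 0, 2]   # hypothetical predictions
kappa = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"quadratic weighted kappa: {kappa:.3f}")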
73. Microaneurysm detection in color eye fundus images for diabetic retinopathy screening
- Author
-
Ana Maria Mendonça, Tânia Melo, and Aurélio Campilho
- Subjects
0301 basic medicine ,Computer science ,Fundus Oculi ,media_common.quotation_subject ,Feature extraction ,Health Informatics ,03 medical and health sciences ,0302 clinical medicine ,medicine ,False positive paradox ,Diabetes Mellitus ,Contrast (vision) ,Humans ,media_common ,Microaneurysm ,Diabetic Retinopathy ,Blindness ,business.industry ,Diabetic retinopathy screening ,Pattern recognition ,Diabetic retinopathy ,Filter (signal processing) ,medicine.disease ,Computer Science Applications ,030104 developmental biology ,Early Diagnosis ,Artificial intelligence ,business ,030217 neurology & neurosurgery ,Algorithms - Abstract
Diabetic retinopathy (DR) is a diabetes complication, which in extreme situations may lead to blindness. Since the first stages are often asymptomatic, regular eye examinations are required for an early diagnosis. As microaneurysms (MAs) are one of the first signs of DR, several automated methods have been proposed for their detection in order to reduce the ophthalmologists' workload. Although local convergence filters (LCFs) have already been applied for feature extraction, their potential as MA enhancement operators was not explored yet. In this work, we propose a sliding band filter for MA enhancement aiming at obtaining a set of initial MA candidates. Then, a combination of the filter responses with color, contrast and shape information is used by an ensemble of classifiers for final candidate classification. Finally, for each eye fundus image, a score is computed from the confidence values assigned to the MAs detected in the image. The performance of the proposed methodology was evaluated in four datasets. At the lesion level, sensitivities of 64% and 81% were achieved for an average of 8 false positives per image (FPIs) in e-ophtha MA and SCREEN-DR, respectively. In the last dataset, an AUC of 0.83 was also obtained for DR detection.
- Published
- 2020
74. Classification of Lung Nodules in CT Volumes Using the Lung-RADS™ Guidelines with Uncertainty Parameterization
- Author
-
Carlos Abreu Ferreira, Joao Rebelo, Antonio José Ledo Alves da Cunha, Eduardo Negrão, Isabel Ramos, Guilherme Aresta, Aurélio Campilho, and João Pedrosa
- Subjects
business.industry ,Computer science ,Deep learning ,Nodule (medicine) ,Sample (statistics) ,02 engineering and technology ,Function (mathematics) ,medicine.disease ,Malignancy ,computer.software_genre ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Region of interest ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,020201 artificial intelligence & image processing ,Artificial intelligence ,Data mining ,medicine.symptom ,Lung cancer ,business ,computer - Abstract
Currently, lung cancer is the most lethal cancer in the world. In order to make screening and follow-up a little more systematic, guidelines have been proposed. Therefore, this study aimed to create a diagnostic support approach by providing a patient label based on the Lung-RADS™ guidelines. The only input required by the system is the nodule centroid, used to extract the region of interest that is fed to the classification system. With this in mind, two deep learning networks were evaluated: a Wide Residual Network and a DenseNet. Taking into account the annotation uncertainty, we proposed to use sample weights introduced in the loss function, allowing nodules with high agreement in the annotation process to have a greater impact on the training error than their counterparts. The best result was achieved with the Wide Residual Network with sample weights, achieving a nodule-wise Lung-RADS™ labelling accuracy of 0.735 ± 0.003.
- Published
- 2020
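Entry 74 above weights each training sample by the agreement among annotators before it enters the loss. A minimal PyTorch sketch of such per-sample weighting is shown below; the weight values and class count are assumptions, not the paper's exact scheme.

import torch
import torch.nn.functional as F

def weighted_ce(logits, targets, sample_weights):
    """Cross-entropy where each nodule contributes according to its annotation agreement."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (sample_weights * per_sample).sum() / sample_weights.sum()

logits = torch.randn(4, 4)                    # 4 nodules, 4 hypothetical Lung-RADS classes
targets = torch.tensor([0, 2, 1, 3])
weights = torch.tensor([1.0, 1.0, 0.4, 0.7])  # agreement-derived weights (hypothetical)
loss = weighted_ce(logits, targets, weights)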
75. O‐MedAL: Online active deep learning for medical image analysis
- Author
-
Mostafa Mirshekari, Alex Gaudio, Susu Xu, Adrian Galdran, Pedro Alves Costa, Aurélio Campilho, Asim Smailagic, Hae Young Noh, Kartik Khandelwal, Devesh Walawalkar, Pei Zhang, and Jonathon Fagert
- Subjects
FOS: Computer and information sciences ,Medal ,Computer Science - Machine Learning ,General Computer Science ,Active learning (machine learning) ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,Binary number ,Machine Learning (stat.ML) ,02 engineering and technology ,Machine learning ,computer.software_genre ,Machine Learning (cs.LG) ,Image (mathematics) ,Statistics - Machine Learning ,Robustness (computer science) ,020204 information systems ,FOS: Electrical engineering, electronic engineering, information engineering ,0202 electrical engineering, electronic engineering, information engineering ,Baseline (configuration management) ,Training set ,business.industry ,Deep learning ,Image and Video Processing (eess.IV) ,Electrical Engineering and Systems Science - Image and Video Processing ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,computer - Abstract
Active Learning methods create an optimized labeled training set from unlabeled data. We introduce a novel Online Active Deep Learning method for Medical Image Analysis. We extend our MedAL active learning framework to present new results in this paper. Our novel sampling method queries the unlabeled examples that maximize the average distance to all training set examples. Our online method enhances performance of its underlying baseline deep network. These novelties contribute significant performance improvements, including improving the model's underlying deep network accuracy by 6.30%, using only 25% of the labeled dataset to achieve baseline accuracy, reducing backpropagated images during training by as much as 67%, and demonstrating robustness to class imbalance in binary and multi-class tasks., Comment: Code: https://github.com/adgaudio/o-medal ; Accepted and published by Wiley Journal of Pattern Recognition and Knowledge Discovery ; Journal URL: https://doi.org/10.1002/widm.1353
- Published
- 2020
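The O-MedAL query rule described in entry 75 above selects the unlabeled examples whose embeddings maximize the average distance to the current training set. The NumPy sketch below illustrates that selection with random placeholder embeddings.

import numpy as np

def select_queries(unlabeled_feats, train_feats, n_queries=5):
    # pairwise Euclidean distances: (n_unlabeled, n_train)
    d = np.linalg.norm(unlabeled_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    avg_dist = d.mean(axis=1)
    return np.argsort(avg_dist)[::-1][:n_queries]   # indices of the farthest examples

rng = np.random.default_rng(0)
queries = select_queries(rng.normal(size=(100, 64)), rng.normal(size=(30, 64)))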
76. Automatic classification of retinal blood vessels based on multilevel thresholding and graph propagation
- Author
-
Aurélio Campilho, Ana Maria Mendonça, and Beatriz Remeseiro
- Subjects
Retinal blood vessels ,Computer science ,business.industry ,020207 software engineering ,Pattern recognition ,02 engineering and technology ,Clinical routine ,Computer Graphics and Computer-Aided Design ,Thresholding ,Retinal image ,Computer graphics ,Clinical diagnosis ,0202 electrical engineering, electronic engineering, information engineering ,Graph (abstract data type) ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Software ,Predictive biomarker - Abstract
Several systemic diseases affect the retinal blood vessels, and thus their assessment allows an accurate clinical diagnosis. This assessment entails the estimation of the arteriolar-to-venular ratio (AVR), a predictive biomarker of cerebral atrophy and cardiovascular events in adults. In this context, different automatic and semiautomatic image-based approaches for artery/vein (A/V) classification and AVR estimation have been proposed in the literature, to the point of having become a hot research topic in the last decades. Most of these approaches use a wide variety of image properties, often redundant and/or irrelevant, requiring a training process that limits their generalization ability when applied to other datasets. This paper presents a new automatic method for A/V classification that just uses the local contrast between blood vessels and their surrounding background, computes a graph that represents the vascular structure, and applies multilevel thresholding to obtain a preliminary classification. Next, a novel graph propagation approach was developed to obtain the final A/V classification and to compute the AVR. Our approach has been tested on two public datasets (INSPIRE and DRIVE), obtaining high classification accuracy rates, especially in the main vessels, and AVR ratios very similar to those provided by human experts. Therefore, our fully automatic method provides reliable results without any training step, which makes it suitable for use with different retinal image datasets and as part of any clinical routine.
- Published
- 2020
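Entry 76 above obtains a preliminary artery/vein labelling by multilevel thresholding of the local vessel contrast and then computes the AVR. The sketch below shows a simplified version of those two steps using scikit-image's multi-Otsu thresholding; the contrast and caliber values are synthetic, and the graph propagation step is not shown.

import numpy as np
from skimage.filters import threshold_multiotsu

rng = np.random.default_rng(1)
contrast = np.concatenate([rng.normal(0.3, 0.05, 50),    # lower contrast: artery-like
                           rng.normal(0.6, 0.05, 50)])   # higher contrast: vein-like
t1, t2 = threshold_multiotsu(contrast, classes=3)
labels = np.digitize(contrast, bins=[t1, t2])            # 0 = artery, 1 = uncertain, 2 = vein

# AVR from hypothetical calibers of vessels labelled artery vs vein
calibers = rng.uniform(5, 12, size=contrast.size)
avr = calibers[labels == 0].mean() / calibers[labels == 2].mean()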
77. A Multi-dataset Approach for DME Risk Detection in Eye Fundus Images
- Author
-
Luís Mendonça, Ana Maria Mendonça, Susana Penas, Carolina Maia, Catarina Carvalho, Ângela Carneiro, Aurélio Campilho, and João Pedrosa
- Subjects
medicine.diagnostic_test ,Artificial neural network ,Computer science ,business.industry ,Diabetic macular edema ,Pattern recognition ,Residual ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Optical coherence tomography ,Hard exudates ,medicine ,Risk detection ,Artificial intelligence ,Risk assessment ,business ,030217 neurology & neurosurgery - Abstract
Diabetic macular edema is a leading cause of visual loss for patients with diabetes. While diagnosis can only be performed by optical coherence tomography, diabetic macular edema risk assessment is often performed in eye fundus images in screening scenarios through the detection of hard exudates. Such screening scenarios are often associated with large amounts of data, high costs and high burden on specialists, motivating then the development of methodologies for automatic diabetic macular edema risk prediction. Nevertheless, significant dataset domain bias, due to different acquisition equipment, protocols and/or different populations can have significantly detrimental impact on the performance of automatic methods when transitioning to a new dataset, center or scenario. As such, in this study, a method based on residual neural networks is proposed for the classification of diabetic macular edema risk. This method is then validated across multiple public datasets, simulating the deployment in a multi-center setting and thereby studying the method’s generalization capability and existing dataset domain bias. Furthermore, the method is tested on a private dataset which more closely represents a realistic screening scenario. An average area under the curve across all public datasets of 0.891 ± 0.013 was obtained with a ResNet50 architecture trained on a limited amount of images from a single public dataset (IDRiD). It is also shown that screening scenarios are significantly more challenging and that training across multiple datasets leads to an improvement of performance (area under the curve of 0.911 ± 0.009).
- Published
- 2020
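Entry 77 above trains a ResNet50 for diabetic macular edema risk prediction. A minimal transfer-learning sketch is shown below; the use of ImageNet weights (torchvision >= 0.13) and a two-class head are assumptions, not the paper's exact training configuration.

import torch.nn as nn
from torchvision.models import resnet50

# ImageNet-pretrained trunk with the final layer replaced for DME risk vs no risk
model = resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)
# model is then fine-tuned on fundus images with a standard cross-entropy objective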
78. Optic Disc and Fovea Detection in Color Eye Fundus Images
- Author
-
Aurélio Campilho, Tânia Melo, Teresa Araújo, and Ana Maria Mendonça
- Subjects
Jaccard index ,business.industry ,Computer science ,Fundus (eye) ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,medicine.anatomical_structure ,Optic disc segmentation ,medicine ,Eye disorder ,Computer vision ,Segmentation ,Artificial intelligence ,business ,030217 neurology & neurosurgery ,Optic disc - Abstract
The optic disc (OD) and the fovea are relevant landmarks in fundus images. Their localization and segmentation can facilitate the detection of some retinal lesions and the assessment of their importance to the severity and progression of several eye disorders. Distinct methodologies have been developed for detecting these structures, mainly based on color and vascular information. The methodology herein described combines the entropy of the vessel directions with the image intensities for finding the OD center and uses a sliding band filter for segmenting the OD. The fovea center corresponds to the darkest point inside a region defined from the OD position and radius. Both the Messidor and the IDRiD datasets are used for evaluating the performance of the developed methods. In the first one, success rates of 99.56% and 100.00% are achieved for OD and fovea localization, respectively. Regarding the OD segmentation, the mean Jaccard index and Dice's coefficient obtained are 0.87 and 0.94, respectively. The proposed methods are also amongst the top-3 performing solutions submitted to the IDRiD online challenge.
- Published
- 2020
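Entry 78 above locates the fovea as the darkest point inside a region defined from the optic disc position and radius. The sketch below implements that rule on the green channel; the annulus distances and smoothing scale are illustrative choices, not the paper's exact parameters.

import numpy as np
from scipy.ndimage import gaussian_filter

def locate_fovea(green_channel, od_center, od_radius):
    smooth = gaussian_filter(green_channel.astype(np.float32), sigma=od_radius / 4)
    rows, cols = np.mgrid[0:smooth.shape[0], 0:smooth.shape[1]]
    dist = np.hypot(rows - od_center[0], cols - od_center[1])
    # search in an annulus roughly 2-3 OD diameters away from the OD centre
    region = (dist > 4 * od_radius) & (dist < 6 * od_radius)
    masked = np.where(region, smooth, np.inf)
    return np.unravel_index(np.argmin(masked), smooth.shape)   # darkest point in the region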
79. A robust anisotropic edge detection method for carotid ultrasound image processing
- Author
-
Ana Domingues, Elsa Azevedo, Aurélio Campilho, José Rouco, and Catarina Carvalho
- Subjects
business.industry ,Orientation (computer vision) ,Computer science ,Quantitative Biology::Tissues and Organs ,Gaussian ,0206 medical engineering ,Regular polygon ,Image processing ,02 engineering and technology ,030204 cardiovascular system & hematology ,Edge (geometry) ,020601 biomedical engineering ,Edge detection ,03 medical and health sciences ,symbols.namesake ,0302 clinical medicine ,Feature (computer vision) ,symbols ,General Earth and Planetary Sciences ,Segmentation ,Computer vision ,Artificial intelligence ,business ,General Environmental Science - Abstract
A new approach for robust edge detection on B-mode ultrasound images of the carotid artery is proposed in this paper. The proposed method uses anisotropic Gaussian derivative filters along with non-maximum suppression over the overall artery wall orientation in local regions. The anisotropic filters allow using a wider integration scale along the edges while preserving the edge location precision. They also perform edge continuation, resulting in the connection of isolated edge points along linear segments, which is a valuable feature for the segmentation of the artery wall layers. However, this usually results in false edges being detected near convex contours and isolated points. The use of non-maximum suppression over pooled local orientations is proposed to solve this issue. Experimental results are provided to demonstrate that the proposed edge detector outperforms other common methods in the detection of the lumen-intima and media-adventitia layer interfaces of the carotid vessel walls. Additionally, the resulting edges are more continuous and precisely located.
- Published
- 2018
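The core ingredient of the detector in entry 79 above is an oriented, anisotropic Gaussian first-derivative filter, with a wide integration scale along the edge and a narrow one across it. The sketch below builds such a kernel; the sigmas and orientation are illustrative, and the non-maximum suppression over pooled orientations is not shown.

import numpy as np
from scipy.ndimage import convolve

def aniso_gauss_deriv_kernel(sigma_along=6.0, sigma_across=1.5, theta=0.0, size=31):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates: u runs along the edge, v across it
    u = x * np.cos(theta) + y * np.sin(theta)
    v = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(u ** 2) / (2 * sigma_along ** 2) - (v ** 2) / (2 * sigma_across ** 2))
    kernel = -v / (sigma_across ** 2) * g          # Gaussian derivative across the edge
    return kernel / np.abs(kernel).sum()

response = convolve(np.random.rand(128, 128), aniso_gauss_deriv_kernel(theta=np.pi / 2))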
80. A Weakly-Supervised Framework for Interpretable Diabetic Retinopathy Detection on Retinal Images
- Author
-
Adrian Galdran, Aurélio Campilho, Asim Smailagic, and Pedro Alves Costa
- Subjects
General Computer Science ,Computer science ,Feature extraction ,Context (language use) ,02 engineering and technology ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,chemistry.chemical_compound ,0302 clinical medicine ,diabetic retinopathy detection ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,General Materials Science ,retinal image analysis ,Interpretability ,Retina ,Receiver operating characteristic ,Contextual image classification ,Pixel ,business.industry ,Multiple instance learning ,General Engineering ,bag of visual words ,Pattern recognition ,Retinal ,Diabetic retinopathy ,medicine.disease ,Visualization ,medicine.anatomical_structure ,chemistry ,020201 artificial intelligence & image processing ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Artificial intelligence ,business ,lcsh:TK1-9971 - Abstract
Diabetic retinopathy (DR) detection is a critical retinal image analysis task in the context of early blindness prevention. Unfortunately, in order to train a model to accurately detect DR based on the presence of different retinal lesions, typically a dataset with medical expert's annotations at the pixel level is needed. In this paper, a new methodology based on the multiple instance learning (MIL) framework is developed in order to overcome this necessity by leveraging the implicit information present on annotations made at the image level. Contrary to previous MIL-based DR detection systems, the main contribution of the proposed technique is the joint optimization of the instance encoding and the image classification stages. In this way, more useful mid-level representations of pathological images can be obtained. The explainability of the model decisions is further enhanced by means of a new loss function enforcing appropriate instance and mid-level representations. The proposed technique achieves comparable or better results than other recently proposed methods, with 90% area under the receiver operating characteristic curve (AUC) on Messidor, 93% AUC on DR1, and 96% AUC on DR2, while improving the interpretability of the produced decisions.
- Published
- 2018
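Entry 80 above builds on the multiple instance learning (MIL) framework, where an image-level DR label supervises scores computed on image patches (instances). The PyTorch sketch below is a generic max-pooling MIL classifier, not the paper's jointly optimized instance encoding; layer sizes are assumptions.

import torch
import torch.nn as nn

class MILClassifier(nn.Module):
    def __init__(self, in_ch=3, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.scorer = nn.Linear(feat, 1)

    def forward(self, patches):                 # patches: (n_instances, C, H, W)
        scores = self.scorer(self.encoder(patches)).squeeze(-1)
        return torch.sigmoid(scores).max()      # image-level DR probability

image_prob = MILClassifier()(torch.rand(16, 3, 64, 64))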
81. A multi-task CNN approach for lung nodule malignancy classification and characterization
- Author
-
Aurélio Campilho, Filippo Schiavo, Carlos Abreu Ferreira, Sónia Marques, Antonio José Ledo Alves da Cunha, and João Pedrosa
- Subjects
medicine.medical_specialty ,Lung ,Computer science ,General Engineering ,Cancer ,Nodule (medicine) ,Malignancy ,medicine.disease ,Computer Science Applications ,Task (project management) ,Clinical Practice ,medicine.anatomical_structure ,Artificial Intelligence ,medicine ,Radiology ,medicine.symptom ,Lung cancer ,Lung cancer screening - Abstract
Lung cancer is the type of cancer with the highest mortality worldwide. Low-dose computerized tomography is the main tool used for lung cancer screening in clinical practice, allowing the visualization of lung nodules and the assessment of their malignancy. However, this evaluation is a complex task and subject to inter-observer variability, which has fueled the need for computer-aided diagnosis systems for lung nodule malignancy classification. While promising results have been obtained with automatic methods, it is often not straightforward to determine which features a given model is basing its decisions on, and this lack of explainability can be a significant stumbling block in guaranteeing the adoption of automatic systems in clinical scenarios. Though visual malignancy assessment has a subjective component, radiologists strongly base their decision on nodule features such as nodule spiculation and texture, and a malignancy classification model should thus follow the same rationale. As such, this study focuses on the characterization of lung nodules as a means for the classification of nodules in terms of malignancy. For this purpose, different model architectures for nodule characterization are proposed and compared, with the final goal of malignancy classification. It is shown that models that combine direct malignancy prediction with specific branches for nodule characterization have a better performance than the remaining models, achieving an Area Under the Curve of 0.783. The most relevant features for malignancy classification according to the model were lobulation, spiculation and texture, which is found to be in line with current clinical practice.
- Published
- 2021
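Entry 81 above combines direct malignancy prediction with dedicated characterization branches on a shared trunk. The sketch below mirrors that multi-task layout for a 3D nodule cube; layer sizes and the chosen set of characterization branches are assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class MultiTaskNodule(nn.Module):
    def __init__(self, feat=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv3d(1, feat, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.malignancy = nn.Linear(feat, 1)
        self.characterization = nn.ModuleDict(
            {name: nn.Linear(feat, 1) for name in ("spiculation", "lobulation", "texture")})

    def forward(self, x):                        # x: (B, 1, D, H, W) nodule cube
        f = self.trunk(x)
        chars = {k: head(f) for k, head in self.characterization.items()}
        return self.malignancy(f), chars

malig, chars = MultiTaskNodule()(torch.rand(2, 1, 32, 32, 32))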
82. Segmentation of gynaecological ultrasound images using different U-Net based approaches
- Author
-
Jorge Silva, Catarina Carvalho, Aurélio Campilho, Sónia Marques, Carla Peixoto, Duarte Pignatelli, and Jorge Beires
- Subjects
Spatial contextual awareness ,business.industry ,Computer science ,Ultrasound ,Cancer ,Pattern recognition ,Ovary ,02 engineering and technology ,Image segmentation ,Malignancy ,medicine.disease ,03 medical and health sciences ,0302 clinical medicine ,Transvaginal ultrasound ,medicine.anatomical_structure ,030220 oncology & carcinogenesis ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,020201 artificial intelligence & image processing ,Segmentation ,Artificial intelligence ,business ,Ovarian cancer - Abstract
Ovarian cancer is one of the most commonly occurring cancers in women. Transvaginal ultrasound is used as a screening test to detect the presence of tumors but, for specific types of ovarian tumors, malignancy can only be asserted through surgery. An automatic method to perform the detection and malignancy assessment of these tumors is thus necessary to prevent unnecessary oophorectomies. This work explores the U-Net architecture and investigates the selection of different hyperparameters for the segmentation of the ovary and the ovarian follicles. The effect of applying different post-processing methods on beam-formed radio-frequency (BRF) data is also investigated. Results show that models trained only with BRF data have the worst performance. On the other hand, the combination of B-mode with BRF data performs better for ovary segmentation. As for the hyperparameter study, results show that the U-Net with 4 levels is the architecture with the worst performance. This shows that, to achieve better performance in the segmentation of ovarian structures, it is important to select an architecture that takes into account the spatial context of the regions of interest. It is also possible to conclude that the method used to analyse BRF data should be designed to take advantage of the fine resolution of BRF data.
- Published
- 2019
83. LNDetector: A Flexible Gaze Characterisation Collaborative Platform for Pulmonary Nodule Screening
- Author
-
Isabel Ramos, Guilherme Aresta, Joao Rebelo, Antonio José Ledo Alves da Cunha, Aurélio Campilho, Eduardo Negrão, and João Pedrosa
- Subjects
Visual search ,Nodule detection ,medicine.diagnostic_test ,business.industry ,Computer science ,Computed tomography ,Machine learning ,computer.software_genre ,Gaze ,ComputingMethodologies_PATTERNRECOGNITION ,Computer-aided diagnosis ,Pulmonary nodule ,medicine ,Classification methods ,Segmentation ,Artificial intelligence ,business ,computer - Abstract
Lung cancer is the deadliest type of cancer worldwide and late detection is one of the major factors for the low survival rate of patients. Low dose computed tomography has been suggested as a potential early screening tool but manual screening is costly, time-consuming and prone to interobserver variability. This has fueled the development of automatic methods for the detection, segmentation and characterisation of pulmonary nodules but its application to the clinical routine is challenging. In this study, a platform for the development, deployment and testing of pulmonary nodule computer-aided strategies is presented: LNDetector. LNDetector integrates image exploration and nodule annotation tools as well as advanced nodule detection, segmentation and classification methods and gaze characterisation. Different processing modules can easily be implemented or replaced to test their efficiency in clinical environments and the use of gaze analysis allows for the development of collaborative strategies. The potential use of this platform is shown through a combination of visual search, gaze characterisation and automatic nodule detection tools for an efficient and collaborative computer-aided strategy for pulmonary nodule screening.
- Published
- 2019
84. Automatic Lung Reference Model
- Author
-
Márcio Rodrigues, João Pedrosa, Isabel Ramos, Aurélio Campilho, André Carvalho, Patrícia Leitão, Joao Rebelo, Antonio José Ledo Alves da Cunha, Marlene Machado, Carlos A. Ferreira, and Eduardo Negrão
- Subjects
Standard anatomical position ,medicine.medical_specialty ,Lung ,medicine.anatomical_structure ,medicine.diagnostic_test ,business.industry ,Medicine ,Computed tomography ,Radiology ,business ,Lung cancer ,medicine.disease ,Reference model - Abstract
Lung cancer diagnosis is based on the search for lung nodules. Besides their characterization, it is also common to register the anatomical position of these findings. Even though computer-aided diagnosis systems tend to help in these tasks, a complete system that can qualitatively label nodules in lung regions is still lacking. Thus, this paper proposes an automatic lung reference model to facilitate the reporting of nodules between computer-aided diagnosis systems and the radiologist, and among radiologists. The model was applied to 115 computed tomography scans with manually and automatically segmented lobes, and the obtained sectors' variability was evaluated. As the sectors' average variability within lobes is less than or equal to 0.14, the model can be a good way to promote the reporting of lung nodules.
- Published
- 2019
85. EyeWeS: Weakly Supervised Pre-Trained Convolutional Neural Networks for Diabetic Retinopathy Detection
- Author
-
Teresa Araújo, Adrian Galdran, Guilherme Aresta, Pedro Alves Costa, Aurélio Campilho, Ana Maria Mendonça, and Asim Smailagic
- Subjects
Blindness ,Receiver operating characteristic ,business.industry ,Computer science ,Pattern recognition ,02 engineering and technology ,Diabetic retinopathy ,medicine.disease ,Convolutional neural network ,03 medical and health sciences ,0302 clinical medicine ,030221 ophthalmology & optometry ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,020201 artificial intelligence & image processing ,Artificial intelligence ,business - Abstract
Diabetic Retinopathy (DR) is one of the leading causes of preventable blindness in the developed world. With the increasing number of diabetic patients there is a growing need for an automated system for DR detection. We propose EyeWeS, a method that not only detects DR in eye fundus images but also pinpoints the regions of the image that contain lesions, while being trained with image labels only. We show that it is possible to convert any pre-trained convolutional neural network into a weakly-supervised model while increasing its performance and efficiency. EyeWeS improved the results of Inception V3 from 94.9% Area Under the Receiver Operating Curve (AUC) to 95.8% AUC while maintaining only approximately 5% of Inception V3's number of parameters. The same model is able to achieve 97.1% AUC in a cross-dataset experiment.
- Published
- 2019
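Entry 85 above converts a pre-trained CNN into a weakly supervised model that both scores the image and pinpoints lesion regions. The sketch below follows that spirit with a ResNet-18 trunk, a 1x1 convolution producing a lesion-evidence map, and global max pooling; the backbone and pooling choices are assumptions, not the exact EyeWeS configuration.

import torch
import torch.nn as nn
from torchvision.models import resnet18

class WeaklySupervisedDR(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18()
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # conv trunk only
        self.evidence = nn.Conv2d(512, 1, kernel_size=1)                # lesion-evidence map

    def forward(self, x):
        emap = self.evidence(self.features(x))         # (B, 1, h, w) evidence map
        score = torch.amax(emap, dim=(2, 3))            # image-level DR score
        return torch.sigmoid(score), emap               # probability + localization map

probs, lesion_map = WeaklySupervisedDR()(torch.rand(1, 3, 512, 512))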
86. Analysis of the performance of specialists and an automatic algorithm in retinal image quality assessment
- Author
-
Angela Carneiro, Catarina Carvalho, Teresa Araújo, Aurélio Campilho, Carolina Maia, Susana Penas, Diego S. Wanderley, and Ana Maria Mendonça
- Subjects
Future studies ,Quality assessment ,Computer science ,Image quality ,media_common.quotation_subject ,0402 animal and dairy science ,Retinal ,04 agricultural and veterinary sciences ,040201 dairy & animal science ,Retinal image ,chemistry.chemical_compound ,chemistry ,040103 agronomy & agriculture ,0401 agriculture, forestry, and fisheries ,Quality (business) ,Sensitivity (control systems) ,Algorithm ,media_common - Abstract
This study describes a novel dataset with retinal image quality annotation, defined by three different retinal experts, and presents an inter-observer analysis for quality assessment that can be used as gold-standard for future studies. A state-of-the-art algorithm for retinal image quality assessment is also analysed and compared against the specialists performance. Results show that, for 71% of the images present in the dataset, the three experts agree on the given image quality label. The results obtained for accuracy, specificity and sensitivity when comparing one expert against another were in the ranges [83.0 − 85.2]%, [72.7 − 92.9]% and [80.0 − 94.7]%, respectively. The evaluated automatic quality assessment method, despite not being trained on the novel dataset, presents a performance which is within inter-observer variability.
- Published
- 2019
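The accuracy, specificity and sensitivity ranges reported in entry 86 above come from comparing one expert's binary quality labels against another's. The snippet below shows that computation on hypothetical labels.

import numpy as np

a = np.array([1, 1, 0, 1, 0, 1, 0, 1])   # observer used as reference (1 = adequate quality)
b = np.array([1, 0, 0, 1, 0, 1, 1, 1])   # observer being evaluated

tp = np.sum((a == 1) & (b == 1)); tn = np.sum((a == 0) & (b == 0))
fp = np.sum((a == 0) & (b == 1)); fn = np.sum((a == 1) & (b == 0))
accuracy = (tp + tn) / a.size
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)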
87. Wide Residual Network for Lung-RADS™ Screening Referral
- Author
-
Guilherme Aresta, Aurélio Campilho, Antonio José Ledo Alves da Cunha, Carlos Arthur Ferreira, and Ana Maria Mendonça
- Subjects
medicine.medical_specialty ,Lung ,medicine.diagnostic_test ,Referral ,business.industry ,0402 animal and dairy science ,Computed tomography ,04 agricultural and veterinary sciences ,respiratory system ,Malignancy ,medicine.disease ,Residual ,040201 dairy & animal science ,respiratory tract diseases ,medicine.anatomical_structure ,Binary classification ,040103 agronomy & agriculture ,medicine ,Screening method ,0401 agriculture, forestry, and fisheries ,Radiology ,business ,Lung cancer - Abstract
Lung cancer has an increasing preponderance in worldwide mortality, demanding the development of efficient screening methods. With this in mind, a binary classification method using the Lung-RADS™ guidelines to warn of changes in screening management is proposed. First, taking into account the lack of public datasets for this task, the lung nodules in the LIDC-IDRI dataset were re-annotated to include a Lung-RADS™-based referral label. Then, a wide residual network is used for automatically assessing lung nodules in 3D chest computed tomography exams. Unlike standard malignancy prediction approaches, the proposed method avoids the need to segment and characterize lung nodules, and instead directly defines whether a patient should be submitted for further lung cancer tests. The system achieves a nodule-wise accuracy of 0.87±0.02.
- Published
- 2019
88. IDRiD: Diabetic Retinopathy - Segmentation and Grading Challenge
- Author
-
Mohammed Safwan, Debdoot Sheet, Bin Zheng, Xin Ji, Varghese Alex, Qinhao Chu, Tian Bo Wu, Manesh Kokare, Ling Dai, Fengyan Wang, Li Cheng, Linsheng He, Rachana Sathish, Junyan Wu, Jianzong Wang, Xinhui Liu, Ting Zhou, Ruogu Fang, Zhongyu Li, Janos Toth, Sang Hyuk Jung, Yeong Chan Lee, Andras Hajdu, Girish Deshmukh, Balazs Harangi, Luca Giancardo, Tânia Melo, Gwenole Quellec, Xiaodan Sui, Pengcheng Li, Aurélio Campilho, Jing Xiao, Jaemin Son, Samiksha Pachade, Gopichandh Danala, Sai Saketh Chennamsetty, Sanyuan Zhang, Baocai Yin, Yunzhi Wang, Dinggang Shen, Teresa Araújo, Xingzheng Lyu, Xiaolong Li, Lihong Liu, Ana Maria Mendonça, Shen Yaxin, Yoon-Ho Choi, Liangxin Gao, Bin Sheng, Fabrice Meriaudeau, Woong Bae, Avinash Kori, Yuanjie Zheng, Oindrila Saha, Agnes Baran, Prasanna Porwal, and Shaoting Zhang
- Subjects
medicine.medical_specialty ,Computer science ,Population ,Datasets as Topic ,Health Informatics ,030218 nuclear medicine & medical imaging ,Pattern Recognition, Automated ,03 medical and health sciences ,0302 clinical medicine ,Deep Learning ,Image Interpretation, Computer-Assisted ,Medical imaging ,medicine ,Photography ,Humans ,Radiology, Nuclear Medicine and imaging ,Medical physics ,Segmentation ,Generalizability theory ,Diagnosis, Computer-Assisted ,education ,Grading (tumors) ,education.field_of_study ,Diabetic Retinopathy ,Radiological and Ultrasound Technology ,business.industry ,Deep learning ,Diabetic retinopathy ,medicine.disease ,Computer Graphics and Computer-Aided Design ,Common cause and special cause ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,030217 neurology & neurosurgery - Abstract
Diabetic Retinopathy (DR) is the most common cause of avoidable vision loss, predominantly affecting the working-age population across the globe. Screening for DR, coupled with timely consultation and treatment, is a globally trusted policy to avoid vision loss. However, implementation of DR screening programs is challenging due to the scarcity of medical professionals able to screen a growing global diabetic population at risk for DR. Computer-aided disease diagnosis in retinal image analysis could provide a sustainable approach for such large-scale screening effort. The recent scientific advances in computing capacity and machine learning approaches provide an avenue for biomedical scientists to reach this goal. Aiming to advance the state-of-the-art in automatic DR diagnosis, a grand challenge on "Diabetic Retinopathy - Segmentation and Grading" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI - 2018). In this paper, we report the set-up and results of this challenge that is primarily based on Indian Diabetic Retinopathy Image Dataset (IDRiD). There were three principal sub-challenges: lesion segmentation, disease severity grading, and localization of retinal landmarks and segmentation. These multiple tasks in this challenge allow to test the generalizability of algorithms, and this is what makes it different from existing ones. It received a positive response from the scientific community with 148 submissions from 495 registrations effectively entered in this challenge. This paper outlines the challenge, its organization, the dataset used, evaluation methods and results of top-performing participating solutions. The top-performing approaches utilized a blend of clinical information, data augmentation, and an ensemble of models. These findings have the potential to enable new developments in retinal image analysis and image-based DR screening in particular.
- Published
- 2019
89. Artificial intelligence and deep learning in retinal image analysis
- Author
-
Aurélio Campilho, Adrian Galdran, Adam B. Cohen, Pedro Alves Costa, and Philippe Burlina
- Subjects
Computer science ,business.industry ,Deep learning ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Retinal ,Convolutional neural network ,Autoencoder ,chemistry.chemical_compound ,Discriminative model ,chemistry ,Medical imaging ,Segmentation ,Artificial intelligence ,business ,Host (network) - Abstract
Our goal in this chapter is to describe the recent application of deep learning and artificial intelligence (AI) techniques to retinal image analysis. Automatic retinal image analysis (ARIA) is a complex task with significant diagnostic applications for a host of retinal, neurological, and vascular diseases. A number of approaches for the automatic analysis of retinal images have been studied over the past two decades, but the recent success of deep learning (DL) for a range of computer vision and image analysis tasks has now permeated medical imaging and ARIA. Since 2016, major improvements have been reported using DL discriminative methods (deep convolutional neural networks or convolutional autoencoder networks) and generative methods, in combination with other image analysis techniques, which have demonstrated that algorithms can perform on par with ophthalmologists and retinal specialists for tasks such as automated classification, diagnostics, and segmentation. We review these recent developments in this chapter.
- Published
- 2019
90. Learned Pre-processing for Automatic Diabetic Retinopathy Detection on Eye Fundus Images
- Author
-
Adrian Galdran, Alex Gaudio, Asim Smailagic, Aurélio Campilho, Anupma Sharan, and Pedro Alves Costa
- Subjects
education.field_of_study ,genetic structures ,Blindness ,business.industry ,Computer science ,Color correction ,Network on ,Population ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,Diabetic retinopathy ,medicine.disease ,01 natural sciences ,Task (project management) ,010309 optics ,0103 physical sciences ,Shadow ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Preprocessor ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,education ,business - Abstract
Diabetic Retinopathy is the leading cause of blindness in the working-age population of the world. The main aim of this paper is to improve the accuracy of Diabetic Retinopathy detection by implementing a shadow removal and color correction step as a pre-processing stage for eye fundus images. For this, we rely on recent findings indicating that applying image dehazing in the inverted intensity domain amounts to illumination compensation. Inspired by this work, we propose a Shadow Removal Layer that allows us to learn the pre-processing function for a particular task. We show that learning the pre-processing function improves the performance of the network on the Diabetic Retinopathy detection task.
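As a rough illustration of the inversion trick referenced in the abstract (a sketch in our own notation, not the paper's learned Shadow Removal Layer): with fundus intensities I normalized to [0, 1] and a generic dehazing operator D, illumination compensation can be written as

\[ \hat{I} = \mathbf{1} - D\!\left(\mathbf{1} - I\right), \]

so that dark, shadowed regions of I appear as haze-like bright regions in 1 - I, are attenuated by D, and are mapped back by the outer inversion. In the paper this pre-processing function is learned end to end for the detection task rather than fixed in advance.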
- Published
- 2019
91. Contributors
- Author
-
Samaneh Abbasi-Sureshjani, Bashir Al-Diri, Stefanos Apostolopoulos, Antonis A. Argyros, Lucia Ballerini, Sarah A. Barman, Erik J. Bekkers, Hrvoje Bogunović, Adrian Bradu, Catey Bunce, Philip I. Burgess, Philippe Burlina, Francesco Calivá, Aurélio Campilho, Guillem Carles, Jun Cheng, Li Cheng, Carol Yim-lui Cheung, Piotr Chudzik, Carlos Ciller, Adam Cohen, Pedro Costa, Gabriela Czanner, Behdad Dashtbozorg, Alexander Doney, Huazhu Fu, Adrian Galdran, Jakob Grauslund, Zaiwang Gu, Pedro Guimarães, Maged Habib, Andrew R. Harvey, Carlos Hernandez-Matas, Stephen Hogg, Wynne Hsu, Zhihong Jewel Hu, Fan Huang, Yan Huang, Emily R. Jefferson, Ryo Kawasaki, Le Van La, Mong Li Lee, Huiqi Li, Xing Li, Zhengguo Li, Gilbert Lim, Jiang Liu, Xuan Liu, Xingzheng Lyu, Tom MacGillivray, Sarah McGrory, Andrew McNeil, Muthu Rama Krishnan Mookiah, Giovanni Ometto, Christopher G. Owen, Adrian Podoleanu, Alicja R. Rudnicka, Alfredo Ruggeri, Srinivas Reddy Sadda, Ursula Schmidt-Erfurth, Raphael Sznitman, Bart M. ter Haar Romeny, Daniel Shu Wei Ting, Emanuele Trucco, Wolf-Dieter Vogl, Sebastian M. Waldstein, Lei Wang, Roshan A. Welikala, Jeffrey Wigdahl, Bryan M. Williams, Sebastian Wolf, Damon Wing Kee Wong, Posey Po-yin Wong, Tien Yin Wong, Yanwu Xu, Dalu Yang, Yehui Yang, Xenophon Zabulis, Sandro De Zanet, Jiong Zhang, He Zhao, and Yalin Zheng
- Published
- 2019
92. Convolutional Neural Network Architectures for Texture Classification of Pulmonary Nodules
- Author
-
Aurélio Campilho, Carlos A. Ferreira, Antonio José Ledo Alves da Cunha, and Ana Maria Mendonça
- Subjects
Relation (database) ,Computer science ,business.industry ,Deep learning ,Early detection ,Nodule (medicine) ,Pattern recognition ,Context (language use) ,Convolutional neural network ,Texture (geology) ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,030220 oncology & carcinogenesis ,medicine ,Artificial intelligence ,medicine.symptom ,business - Abstract
Lung cancer is one of the most common causes of death in the world. The early detection of lung nodules allows appropriate follow-up and timely treatment, and can potentially prevent greater damage to the patient's health. Texture is one of the nodule characteristics that is correlated with malignancy. We developed convolutional neural network architectures to automatically classify nodule texture into the non-solid, part-solid and solid classes. The different architectures were tested to determine whether the context, the number of slices considered as input, and the relation between slices influence the texture classification performance. The architecture with the best performance took into account different scales, different rotations and the context of the nodule, obtaining an accuracy of 0.833 ± 0.041.
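For readers unfamiliar with this kind of model, a minimal sketch of a multi-scale patch classifier for the three texture classes is given below. This is not the architecture evaluated in the paper; the layer sizes, patch scales and helper names are illustrative assumptions only.

```python
# Minimal sketch (not the paper's architecture): a two-branch CNN that looks at
# the same nodule at two patch scales and predicts non-solid / part-solid / solid.
import torch
import torch.nn as nn

class TwoScaleTextureNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        # One small convolutional branch per input scale (weights not shared).
        self.branch_small = self._make_branch()
        self.branch_large = self._make_branch()
        self.classifier = nn.Linear(2 * 32, num_classes)

    @staticmethod
    def _make_branch():
        return nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling -> (N, 32, 1, 1)
            nn.Flatten(),
        )

    def forward(self, patch_small, patch_large):
        # Both patches are assumed to be resampled beforehand to the same grid, e.g. 64x64.
        features = torch.cat(
            [self.branch_small(patch_small), self.branch_large(patch_large)], dim=1
        )
        return self.classifier(features)  # raw logits for the 3 texture classes

if __name__ == "__main__":
    model = TwoScaleTextureNet()
    x_small = torch.randn(4, 1, 64, 64)   # tight crop around the nodule
    x_large = torch.randn(4, 1, 64, 64)   # wider crop including context
    print(model(x_small, x_large).shape)  # torch.Size([4, 3])
```

The two-branch design mirrors the general idea described in the abstract, namely feeding the network the nodule at different scales together with its surrounding context.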
- Published
- 2019
93. CATARACTS: Challenge on automatic tool annotation for cataRACT surgery
- Author
-
Guilherme Aresta, Chandan Panda, Gwenole Quellec, Senthil Ramamurthy, Xiaowei Hu, Adrian Galdran, Navdeep Dahiya, Fenqiang Zhao, Pheng-Ann Heng, Evangello Flouty, William G. Macready, Danail Stoyanov, Sailesh Conjeti, Anirban Mukhopadhyay, Sabrina Dill, Stefan Zachow, Jogundas Armaitis, Mathieu Lamard, Pedro Alves Costa, Shunren Xia, Jonas Prellberg, Manish Sahu, Satoshi Kondo, Pierre-Henri Conze, Muneer Ahmad Dedmari, Chenhui Qiu, Arash Vahdat, Gabija Maršalkaitė, Zhengbing Bian, Jonas Bialopetravičius, Duc My Vo, Soumali Roychowdhury, Béatrice Cochener, Odysseas Zisimopoulos, Teresa Araújo, Sang-Woong Lee, Hassan Al Hajj, Aurélio Campilho, Institut National de la Santé et de la Recherche Médicale (INSERM), Université de Brest (UBO), Département lmage et Traitement Information (IMT Atlantique - ITI), IMT Atlantique Bretagne-Pays de la Loire (IMT Atlantique), Institut Mines-Télécom [Paris] (IMT)-Institut Mines-Télécom [Paris] (IMT), Laboratoire de Traitement de l'Information Medicale (LaTIM), Université de Brest (UBO)-Institut National de la Santé et de la Recherche Médicale (INSERM)-Centre Hospitalier Régional Universitaire de Brest (CHRU Brest)-IMT Atlantique Bretagne-Pays de la Loire (IMT Atlantique), Institut Mines-Télécom [Paris] (IMT)-Institut Mines-Télécom [Paris] (IMT)-Institut Brestois Santé Agro Matière (IBSAM), The Chinese University of Hong Kong [Hong Kong], Technische Universität Munchen - Université Technique de Munich [Munich, Allemagne] (TUM), Service d'ophtalmologie [Brest], and Université de Brest (UBO)-Centre Hospitalier Régional Universitaire de Brest (CHRU Brest)
- Subjects
Decision support system ,Computer science ,medicine.medical_treatment ,media_common.quotation_subject ,instrumentation [Cataract Extraction] ,Psychological intervention ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Video Recording ,Health Informatics ,Context (language use) ,Cataract Extraction ,02 engineering and technology ,03 medical and health sciences ,Annotation ,0302 clinical medicine ,Deep Learning ,Cataracts ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,[INFO.INFO-IM]Computer Science [cs]/Medical Imaging ,Humans ,Radiology, Nuclear Medicine and imaging ,Quality (business) ,ddc:610 ,media_common ,Radiological and Ultrasound Technology ,business.industry ,Deep learning ,Cataract surgery ,medicine.disease ,Surgical Instruments ,Computer Graphics and Computer-Aided Design ,Data science ,030221 ophthalmology & optometry ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Algorithms - Abstract
Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future.
- Published
- 2019
94. End-to-End Ovarian Structures Segmentation
- Author
-
Jorge Beires, Diego S. Wanderley, Carla Peixoto, Jorge Silva, Catarina Carvalho, Duarte Pignatelli, Aurélio Campilho, and Ana Domingues
- Subjects
Similarity (geometry) ,Computer science ,business.industry ,0206 medical engineering ,Pattern recognition ,Image processing ,02 engineering and technology ,020601 biomedical engineering ,Convolutional neural network ,Regularization (mathematics) ,Field (computer science) ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,End-to-end principle ,Segmentation ,Artificial intelligence ,Medical diagnosis ,business - Abstract
The segmentation and characterization of ovarian structures are important tasks in gynecological and reproductive medicine. Ultrasound imaging is typically used for medical diagnosis in this field, but the images can be difficult to interpret due to their characteristics. Furthermore, the complexity of ultrasound data may require heavy image processing, which makes the application of classical computer vision methods difficult. This work presents the first supervised fully convolutional neural network (fCNN) for the automatic segmentation of ovarian structures in B-mode ultrasound images. Because only a small dataset was available, just 57 images were used for training. In order to overcome this limitation, several regularization techniques were used and are discussed in this paper. The experiments show the ability of the fCNN to learn features that distinguish ovarian structures, achieving a Dice similarity coefficient (DSC) of 0.855 for the segmentation of the stroma and a DSC of 0.955 for the follicles. When compared with a semi-automatic commercial application for follicle segmentation, the proposed fCNN achieved an average improvement of 19%.
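For reference, the Dice similarity coefficient reported here is 2|A∩B| / (|A| + |B|) for a predicted mask A and a ground-truth mask B. A minimal NumPy helper (our own illustration, not code from the paper) is:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks of equal shape."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A DSC of 1.0 indicates perfect overlap with the reference annotation, and 0.0 indicates no overlap at all.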
- Published
- 2019
95. Deep Learning Approaches for Gynaecological Ultrasound Image Segmentation: A Radio-Frequency vs B-mode Comparison
- Author
-
Duarte Pignatelli, Sónia Marques, Carla Peixoto, Jorge Beires, Catarina Carvalho, Jorge Silva, and Aurélio Campilho
- Subjects
Computer science ,business.industry ,Deep learning ,Ultrasound ,Ovary ,Pattern recognition ,02 engineering and technology ,Image segmentation ,medicine.disease ,Convolutional neural network ,03 medical and health sciences ,0302 clinical medicine ,medicine.anatomical_structure ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,020201 artificial intelligence & image processing ,Segmentation ,Artificial intelligence ,Ultrasound image segmentation ,Ovarian cancer ,business ,030217 neurology & neurosurgery - Abstract
Ovarian cancer is one of the pathologies with the worst prognosis in adult women, and its early diagnosis is very difficult. Clinical evaluation of gynaecological ultrasound images is performed visually and depends on the experience of the medical doctor. Besides this dependency on specialists, the malignancy of specific types of ovarian tumors cannot be ascertained until their surgical removal. This work explores the use of ultrasound data for the segmentation of the ovary and the ovarian follicles, using two different convolutional neural networks, a fully connected residual network and a U-Net, with both a binary and a multi-class approach. Five different types of ultrasound data (from beamformed radio-frequency to brightness mode) were used as input. The best performance was obtained using B-mode, for both ovary and follicle segmentation. No significant differences were found between the two convolutional neural networks. The multi-class approach was beneficial, as it provided the model with information on the spatial relation between follicles and the ovary. This study demonstrates the suitability of combining convolutional neural networks with beamformed radio-frequency data and with brightness-mode data for the segmentation of ovarian structures. Future steps involve the processing of pathological data and the investigation of biomarkers of pathological ovaries.
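As background for the radio-frequency vs. B-mode comparison, the standard conversion from beamformed RF lines to a B-mode image is envelope detection followed by log compression. The sketch below follows that textbook pipeline and is not necessarily the exact processing used in the paper; the function and parameter names are our own.

```python
import numpy as np
from scipy.signal import hilbert

def rf_to_bmode(rf, dynamic_range_db=60.0):
    """Convert beamformed RF data (samples x scan lines) to a B-mode image in dB."""
    envelope = np.abs(hilbert(rf, axis=0))          # envelope detection along each scan line
    envelope /= envelope.max() + 1e-12              # normalize to [0, 1]
    bmode_db = 20.0 * np.log10(envelope + 1e-12)    # log compression
    return np.clip(bmode_db, -dynamic_range_db, 0)  # keep only the chosen dynamic range
```

Because this conversion discards phase information, comparing segmentation networks trained on RF against networks trained on B-mode probes whether that discarded information matters for the task.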
- Published
- 2019
96. An unsupervised metaheuristic search approach for segmentation and volume measurement of pulmonary nodules in lung CT scans
- Author
-
Elham Shakibapour, Ana Maria Mendonça, Antonio José Ledo Alves da Cunha, Guilherme Aresta, and Aurélio Campilho
- Subjects
0209 industrial biotechnology ,Feature data ,Computer science ,business.industry ,Feature vector ,General Engineering ,Pattern recognition ,Nodule (medicine) ,02 engineering and technology ,Malignancy ,medicine.disease ,Computer Science Applications ,Lesion ,020901 industrial engineering & automation ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,020201 artificial intelligence & image processing ,Segmentation ,Artificial intelligence ,medicine.symptom ,business ,Cluster analysis ,Volume (compression) - Abstract
This paper proposes a new methodology to automatically segment and measure the volume of pulmonary nodules in lung computed tomography (CT) scans. The need to estimate the malignancy likelihood of a pulmonary nodule from lesion characteristics motivated the development of an unsupervised pulmonary nodule segmentation and volume measurement method as a preliminary stage for nodule characterization. The idea is to optimally cluster a set of feature vectors, composed of intensity and shape-related features extracted from a pre-detected nodule, in the corresponding feature space. For that purpose, a metaheuristic search based on evolutionary computation is used to cluster the feature vectors. The proposed method is simple, unsupervised and able to segment different types of nodules in terms of location and texture without the need for any manual annotation. We validate the proposed segmentation and volume measurement on the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset, using two subsets: the first is a group of 705 solid and sub-solid (assessed as part-solid and non-solid) nodules located in different regions of the lungs, and the second, more challenging, is a group of 59 sub-solid nodules. The average Dice scores of 82.35% and 71.05% for the two subsets show the good performance of the segmentation proposal. Comparisons with previous state-of-the-art techniques also show acceptable and comparable segmentation results. The volumes of the segmented nodules are measured via ellipsoid approximation. The correlation and statistical significance between the measured volumes of the segmented nodules and the ground truth are assessed with the Pearson correlation coefficient, obtaining an R-value ≥ 92.16% at a 5% significance level.
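The ellipsoid approximation mentioned for volume measurement corresponds to the standard formula (the fitting procedure itself is described in the paper; the expression below is just the textbook relation):

\[ V = \tfrac{4}{3}\,\pi\, a\, b\, c, \]

where a, b and c are the semi-axis lengths of the ellipsoid fitted to the segmented nodule.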
- Published
- 2019
97. MedAL: Accurate and Robust Deep Active Learning for Medical Image Analysis
- Author
-
Aurélio Campilho, Hae Young Noh, Pedro Alves Costa, Devesh Walawalkar, Pei Zhang, Adrian Galdran, Susu Xu, Jonathon Fagert, Mostafa Mirshekari, Kartik Khandelwal, and Asim Smailagic
- Subjects
Medal ,Training set ,Computer science ,Entropy (statistical thermodynamics) ,business.industry ,Feature vector ,Deep learning ,Pattern recognition ,02 engineering and technology ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Entropy (classical thermodynamics) ,0302 clinical medicine ,020204 information systems ,Active learning ,0202 electrical engineering, electronic engineering, information engineering ,Entropy (information theory) ,Artificial intelligence ,Entropy (energy dispersal) ,business ,Entropy (arrow of time) ,Entropy (order and disorder) - Abstract
Deep learning models have been successfully used in medical image analysis problems, but they require a large number of labeled images to obtain good performance. Such large labeled datasets are costly to acquire. Active learning techniques can be used to minimize the number of required training labels while maximizing the model's performance. In this work, we propose a novel sampling method that queries the unlabeled examples that maximize the average distance to all training set examples in a learned feature space. We then extend our sampling method to define a better initial training set, without the need for a trained model, by using Oriented FAST and Rotated BRIEF (ORB) feature descriptors. We validate MedAL on three medical image datasets and show that our method is robust to different dataset properties. MedAL is also efficient, achieving 80% accuracy on the task of Diabetic Retinopathy detection using only 425 labeled images, corresponding to a 32% reduction in the number of required labeled examples compared to the standard uncertainty sampling technique, and a 40% reduction compared to random sampling.
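The sampling criterion described above (query the unlabeled examples with the largest average distance to the training set in a learned feature space) can be sketched as follows; the function and variable names are our assumptions, not the authors' code, and the feature vectors would typically come from an intermediate layer of the partially trained network.

```python
import numpy as np
from scipy.spatial.distance import cdist

def select_next_examples(unlabeled_feats, train_feats, n_query=1):
    """Return indices of the unlabeled feature vectors whose mean Euclidean
    distance to all training feature vectors is largest (MedAL-style criterion)."""
    distances = cdist(unlabeled_feats, train_feats, metric="euclidean")  # (n_unlabeled, n_train)
    mean_dist = distances.mean(axis=1)
    return np.argsort(mean_dist)[::-1][:n_query]
```

The selected examples are then labeled by an expert, added to the training set, and the model is retrained before the next query round.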
- Published
- 2018
98. Image Analysis and Recognition : 17th International Conference, ICIAR 2020, Póvoa De Varzim, Portugal, June 24–26, 2020, Proceedings, Part II
- Author
-
Aurélio Campilho, Fakhri Karray, Zhou Wang, Aurélio Campilho, Fakhri Karray, and Zhou Wang
- Subjects
- Database management, Image processing—Digital techniques, Computer vision, Education—Data processing, Social sciences—Data processing, Pattern recognition systems, Machine learning
- Abstract
This two-volume set LNCS 12131 and LNCS 12132 constitutes the refereed proceedings of the 17th International Conference on Image Analysis and Recognition, ICIAR 2020, held in Póvoa de Varzim, Portugal, in June 2020. The 54 full papers presented together with 15 short papers were carefully reviewed and selected from 123 submissions. The papers are organized in the following topical sections: image processing and analysis; video analysis; computer vision; 3D computer vision; machine learning; medical image analysis; analysis of histopathology images; diagnosis and screening of ophthalmic diseases; and grand challenge on automatic lung cancer patient management. Due to the coronavirus pandemic, ICIAR 2020 was held as a virtual-only event.
- Published
- 2020
99. Image Analysis and Recognition : 16th International Conference, ICIAR 2019, Waterloo, ON, Canada, August 27–29, 2019, Proceedings, Part II
- Author
-
Fakhri Karray, Aurélio Campilho, Alfred Yu, Fakhri Karray, Aurélio Campilho, and Alfred Yu
- Subjects
- Computer vision, Pattern recognition systems, Artificial intelligence, Computer networks, Medical informatics
- Abstract
This two-volume set LNCS 11662 and 11663 constitutes the refereed proceedings of the 16th International Conference on Image Analysis and Recognition, ICIAR 2019, held in Waterloo, ON, Canada, in August 2019. The 58 full papers presented together with 24 short and 2 poster papers were carefully reviewed and selected from 142 submissions. The papers are organized in the following topical sections: Image Processing; Image Analysis; Signal Processing Techniques for Ultrasound Tissue Characterization and Imaging in Complex Biological Media; Advances in Deep Learning; Deep Learning on the Edge; Recognition; Applications; Medical Imaging and Analysis Using Deep Learning and Machine Intelligence; Image Analysis and Recognition for Automotive Industry; Adaptive Methods for Ultrasound Beamforming and Motion Estimation.
- Published
- 2019
100. Hessian based approaches for 3D lung nodule segmentation
- Author
-
Luís Gonçalves, Jorge Novo, and Aurélio Campilho
- Subjects
Hessian matrix ,Computer science ,02 engineering and technology ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,symbols.namesake ,0302 clinical medicine ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Segmentation ,Lung cancer ,Lung ,business.industry ,General Engineering ,Pattern recognition ,Nodule (medicine) ,medicine.disease ,Computer Science Applications ,medicine.anatomical_structure ,Computer-aided diagnosis ,symbols ,020201 artificial intelligence & image processing ,Tomography ,Artificial intelligence ,medicine.symptom ,business - Abstract
In the design of computer-aided diagnosis systems for lung cancer diagnosis, an appropriate and accurate segmentation of the pulmonary nodules in computerized tomography (CT) is one of the most relevant and difficult tasks. An accurate segmentation is crucial for the posterior measurement of nodule characteristics and for lung cancer diagnosis. This paper proposes different approaches that use Hessian-based strategies for lung nodule segmentation in chest CT scans. We propose a multiscale segmentation process that uses the central medialness adaptive principle, a Hessian-based strategy that was originally formulated for tubular extraction but also provides good segmentation results for blob-like structures such as lung nodules. We compared this proposal with a well-established Hessian-based strategy that calculates the Shape Index (SI) and Curvedness (CV), which we adapted for multiscale nodule segmentation. Moreover, we propose combining the results of both strategies in order to benefit from their complementary advantages. Cases with pulmonary nodules from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database were used to analyze and validate the approaches; the chest CT images present a large variability in nodule characteristics and image conditions. Our proposals provide an accurate lung nodule segmentation, similar to the radiologists' performance. The Hessian-based approaches were validated on 569 solid and mostly solid nodules, demonstrating that these novel strategies achieve good results when compared with the radiologists' segmentations and provide accurate pulmonary nodule volumes for posterior characterization and appropriate diagnosis.
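For context, the classical shape index and curvedness (in Koenderink's formulation) on which the SI/CV strategy builds are, for principal curvatures κ1 ≥ κ2,

\[ SI = \frac{2}{\pi}\arctan\!\left(\frac{\kappa_2 + \kappa_1}{\kappa_2 - \kappa_1}\right), \qquad CV = \sqrt{\frac{\kappa_1^{2} + \kappa_2^{2}}{2}}, \]

where SI characterizes the local shape (from cup-like to cap-like) and CV its magnitude. The multiscale 3D adaptation used in the paper derives these quantities from the Hessian eigenvalues at each voxel; the details of that adaptation are given in the paper itself.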
- Published
- 2016