12 results for "Conze, Pierre-Henri"
Search Results
2. Detection of Diabetic Retinopathy Using Longitudinal
- Author
-
Zeghlache, Rachid, Conze, Pierre-Henri, Daho, Mostafa El Habib, Tadayoni, Ramin, Massin, Pascal, Cochener, Béatrice, Quellec, Gwenolé, Lamard, Mathieu, Antony, Bhavna, editor, Fu, Huazhu, editor, Lee, Cecilia S., editor, MacGillivray, Tom, editor, Xu, Yanwu, editor, and Zheng, Yalin, editor
- Published
- 2022
- Full Text
- View/download PDF
3. Deep Active Learning for Dual-View Mammogram Analysis
- Author
-
Yan, Yutong, Conze, Pierre-Henri, Lamard, Mathieu, Zhang, Heng, Quellec, Gwenolé, Cochener, Béatrice, Coatrieux, Gouenou, Lian, Chunfeng, editor, Cao, Xiaohuan, editor, Rekik, Islem, editor, Xu, Xuanang, editor, and Yan, Pingkun, editor
- Published
- 2021
- Full Text
- View/download PDF
4. Longitudinal Detection of Diabetic Retinopathy Early Severity Grade Changes Using Deep Learning
- Author
-
Yan, Yutong, Conze, Pierre-Henri, Quellec, Gwenolé, Massin, Pascale, Lamard, Mathieu, Coatrieux, Gouenou, Cochener, Béatrice, Fu, Huazhu, editor, Garvin, Mona K., editor, MacGillivray, Tom, editor, Xu, Yanwu, editor, and Zheng, Yalin, editor
- Published
- 2021
- Full Text
- View/download PDF
5. Multi-tasking Siamese Networks for Breast Mass Detection Using Dual-View Mammogram Matching
- Author
-
Yan, Yutong, Conze, Pierre-Henri, Lamard, Mathieu, Quellec, Gwenolé, Cochener, Béatrice, Coatrieux, Gouenou, Liu, Mingxia, editor, Yan, Pingkun, editor, Lian, Chunfeng, editor, and Cao, Xiaohuan, editor
- Published
- 2020
- Full Text
- View/download PDF
6. Hybrid Fusion of High-Resolution and Ultra-Widefield OCTA Acquisitions for the Automatic Diagnosis of Diabetic Retinopathy.
- Author
-
Li, Yihao, El Habib Daho, Mostafa, Conze, Pierre-Henri, Zeghlache, Rachid, Le Boité, Hugo, Bonnin, Sophie, Cosette, Deborah, Magazzeni, Stephanie, Lay, Bruno, Le Guilcher, Alexandre, Tadayoni, Ramin, Cochener, Béatrice, Lamard, Mathieu, and Quellec, Gwenolé
- Subjects
DIABETIC retinopathy, RECEIVER operating characteristic curves, OPTICAL coherence tomography, DEEP learning
- Abstract
Optical coherence tomography angiography (OCTA) can deliver enhanced diagnosis for diabetic retinopathy (DR). This study evaluated a deep learning (DL) algorithm for automatic DR severity assessment using high-resolution and ultra-widefield (UWF) OCTA. Diabetic patients were examined with 6 × 6 mm² high-resolution OCTA and 15 × 15 mm² UWF-OCTA using the PLEX® Elite 9000. A novel DL algorithm was trained for automatic DR severity inference using both OCTA acquisitions. The algorithm employed a hybrid fusion framework integrating structural and flow information from both acquisitions. It was trained on data from 875 eyes of 444 patients. Tested on 53 patients (97 eyes), the algorithm achieved a good area under the receiver operating characteristic curve (AUC) for detecting DR (0.8868), moderate non-proliferative DR (0.8276), severe non-proliferative DR (0.8376), and proliferative/treated DR (0.9070). These results significantly outperformed detection with the 6 × 6 mm² (AUC = 0.8462, 0.7793, 0.7889, and 0.8104, respectively) or 15 × 15 mm² (AUC = 0.8251, 0.7745, 0.7967, and 0.8786, respectively) acquisitions alone. Combining high-resolution and UWF-OCTA acquisitions thus holds potential for improved early- and late-stage DR detection, offering a foundation for enhancing DR management and a clear path for future work involving expanded datasets and additional imaging modalities. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
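The hybrid fusion described in the abstract above can be illustrated with a minimal sketch: each acquisition is encoded separately and the two feature vectors are concatenated before a single classification head. The encoder, its random projection, and all shapes below are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

def branch_features(volume, dim=4):
    # Hypothetical per-acquisition encoder: global average pooling over
    # the spatial grid, then a fixed random projection standing in for
    # a trained CNN branch.
    rng = np.random.default_rng(0)
    w = rng.standard_normal((volume.shape[-1], dim))
    return volume.mean(axis=(0, 1)) @ w

def hybrid_fusion(feat_hr, feat_uwf):
    # Concatenate high-resolution and ultra-widefield features so one
    # head can weigh cues from both fields of view.
    return np.concatenate([feat_hr, feat_uwf])

hr = np.ones((6, 6, 2))     # toy 6 x 6 'high-resolution' grid, 2 channels
uwf = np.ones((15, 15, 2))  # toy 15 x 15 'ultra-widefield' grid
fused = hybrid_fusion(branch_features(hr), branch_features(uwf))
```

The fused vector keeps both branches' information side by side, leaving the weighting of each field of view to the downstream classifier.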
7. Two-stage multi-scale breast mass segmentation for full mammogram analysis without user intervention.
- Author
-
Yan, Yutong, Conze, Pierre-Henri, Quellec, Gwenolé, Lamard, Mathieu, Cochener, Beatrice, and Coatrieux, Gouenou
- Subjects
COMPUTER-aided diagnosis, MAMMOGRAMS, BREAST, MEDICAL personnel, CANCER diagnosis, EARLY diagnosis
- Abstract
Mammography is the primary imaging modality used for early detection and diagnosis of breast cancer. X-ray mammogram analysis mainly consists of localizing suspicious regions of interest and then segmenting them, towards further lesion classification into benign versus malignant. Among the diverse types of breast abnormalities, masses are the most important clinical findings of breast carcinomas. However, manually segmenting breast masses from native mammograms is time-consuming and error-prone. An integrated computer-aided diagnosis system is therefore required to assist clinicians with automatic and precise breast mass delineation. In this work, we present a two-stage multi-scale pipeline that provides accurate mass contours from high-resolution full mammograms. First, we propose an extended deep detector integrating a multi-scale fusion strategy for automated mass localization. Second, a convolutional encoder-decoder network using nested and dense skip connections is employed to finely delineate candidate masses. Unlike most previous studies based on segmentation from regions, our framework handles mass segmentation from native full mammograms without any user intervention. Trained on the INbreast and DDSM-CBIS public datasets, the pipeline achieves an overall average Dice of 80.44% on INbreast test images, outperforming the state of the art. Our system shows promising accuracy as an automatic full-image mass segmentation system. Extensive experiments reveal robustness against the diversity of size, shape and appearance of breast masses, towards better interaction-free computer-aided diagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
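The pipeline above is evaluated with the Dice coefficient (80.44% on INbreast). A minimal reference implementation of that metric on toy masks looks like this; the masks are illustrative, not taken from the paper.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    # Dice coefficient between two binary masks:
    # 2 * |intersection| / (|pred| + |target|)
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1  # 2x2 toy 'mass'
b = np.zeros((4, 4), dtype=int); b[1:3, 2:4] = 1  # shifted candidate
# overlap = 2 px, |a| = |b| = 4  ->  Dice = 2*2 / 8 = 0.5
```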
8. Abdominal multi-organ segmentation with cascaded convolutional and adversarial deep networks.
- Author
-
Conze, Pierre-Henri, Kavur, Ali Emre, Cornec-Le Gall, Emilie, Gezer, Naciye Sinem, Le Meur, Yannick, Selver, M. Alper, and Rousseau, François
- Subjects
GENERATIVE adversarial networks, DEEP learning, COMPUTER-aided diagnosis, COMPUTED tomography, IMAGE analysis
- Abstract
Abdominal anatomy segmentation is crucial for numerous applications, from computer-assisted diagnosis to image-guided surgery. In this context, we address fully-automated multi-organ segmentation from abdominal CT and MR images using deep learning. The proposed model extends standard conditional generative adversarial networks. In addition to the discriminator, which pushes the model to create realistic organ delineations, it embeds cascaded, partially pre-trained convolutional encoder-decoders as the generator. Fine-tuning encoders pre-trained on a large amount of non-medical images alleviates data-scarcity limitations. The network is trained end-to-end to benefit from simultaneous multi-level segmentation refinements using auto-context. Applied to healthy liver, kidney and spleen segmentation, our pipeline provides promising results, outperforming state-of-the-art encoder-decoder schemes. Entered in the Combined Healthy Abdominal Organ Segmentation (CHAOS) challenge organized in conjunction with the IEEE International Symposium on Biomedical Imaging 2019, it earned first rank in three competition categories: liver CT, liver MR and multi-organ MR segmentation. Combining cascaded convolutional and adversarial networks strengthens the ability of deep learning pipelines to automatically delineate multiple abdominal organs, with good generalization capability. The comprehensive evaluation provided suggests that better guidance could be achieved to help clinicians in abdominal image interpretation and clinical decision making. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
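The auto-context idea in the abstract above, where each stage receives the image together with the previous stage's prediction, can be sketched as follows. The thresholding rule inside `stage` is a hypothetical stand-in for one pre-trained encoder-decoder, not the paper's network.

```python
def stage(image, prev_mask):
    # Hypothetical refinement stage: keeps pixels that are bright AND
    # were already proposed by the previous stage's mask.
    return [[1 if px > 0.5 and m == 1 else 0
             for px, m in zip(irow, mrow)]
            for irow, mrow in zip(image, prev_mask)]

def cascade(image, n_stages=2):
    # Auto-context cascade: start from an uninformative all-ones prior
    # and let each stage refine the previous stage's output.
    mask = [[1] * len(row) for row in image]
    for _ in range(n_stages):
        mask = stage(image, mask)
    return mask

img = [[0.9, 0.2], [0.6, 0.8]]
out = cascade(img)  # -> [[1, 0], [1, 1]]
```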
9. Towards improved breast mass detection using dual-view mammogram matching.
- Author
-
Yan, Yutong, Conze, Pierre-Henri, Lamard, Mathieu, Quellec, Gwenolé, Cochener, Béatrice, and Coatrieux, Gouenou
- Subjects
COMPUTER-aided diagnosis, MAMMOGRAMS, CANCER diagnosis, ARTIFICIAL neural networks, MEDICAL personnel
- Abstract
• Multi-view information fusion can improve computer-aided diagnosis of breast cancer.
• A new deep Siamese pipeline is proposed for mass detection from dual-view mammograms.
• Multi-tasking abilities of deep models are used to jointly learn matching and classification.
• Dual-view matching improves both patch classification and examination-level detection.
• Our automatic mass detector could act as a second opinion for mammogram interpretation.
Breast cancer screening benefits from the visual analysis of multiple views of routine mammograms. As in clinical practice, computer-aided diagnosis (CAD) systems could be enhanced by integrating multi-view information. In this work, we propose a new multi-tasking framework that combines craniocaudal (CC) and mediolateral-oblique (MLO) mammograms for automatic breast mass detection. Rather than addressing mass recognition only, we exploit the multi-tasking properties of deep networks to jointly learn mass matching and classification, towards better detection performance. Specifically, we propose a unified Siamese network that combines patch-level mass/non-mass classification and dual-view mass matching to take full advantage of multi-view information. This model is exploited in a full-image detection pipeline based on You-Only-Look-Once (YOLO) region proposals. We carry out exhaustive experiments to highlight the contribution of dual-view matching for both patch-level classification and examination-level detection scenarios. Results demonstrate that mass matching greatly improves the full-pipeline detection performance, outperforming conventional single-task schemes with an Area Under the Curve (AUC) score of 94.78% and a classification accuracy of 0.8791. Interestingly, mass classification also improves the performance of mass matching, which demonstrates the complementarity of the two tasks.
Our method further guides clinicians by providing accurate dual-view mass correspondences, which suggests that it could act as a relevant second opinion for mammogram interpretation and breast cancer diagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
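The dual-view matching component above can be sketched minimally: two patches are embedded with shared weights and declared a match when their embeddings are close. The embeddings and threshold here are illustrative; in the paper the matching head is learned.

```python
import math

def cosine(u, v):
    # Cosine similarity between two patch embeddings.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def match(cc_embedding, mlo_embedding, threshold=0.8):
    # Declare a CC/MLO patch pair 'same mass' when their shared-weight
    # embeddings are close -- a stand-in for the learned matching head.
    return cosine(cc_embedding, mlo_embedding) >= threshold

same = match([1.0, 0.0, 1.0], [0.9, 0.1, 1.1])  # near-identical patches
diff = match([1.0, 0.0, 1.0], [0.0, 1.0, 0.0])  # orthogonal patches
```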
10. Improving abdominal image segmentation with overcomplete shape priors.
- Author
-
Sadikine, Amine, Badic, Bogdan, Tasu, Jean-Pierre, Noblet, Vincent, Ballet, Pascal, Visvikis, Dimitris, and Conze, Pierre-Henri
- Subjects
IMAGE segmentation, DEEP learning, COMPUTER-aided diagnosis, IMAGE analysis, COMPUTER-assisted image analysis (Medicine), DIAGNOSTIC imaging, FUZZY algorithms
- Abstract
The extraction of abdominal structures using deep learning has recently attracted widespread interest in medical image analysis. Automatic abdominal organ and vessel segmentation is highly desirable to guide clinicians in computer-assisted diagnosis, therapy, or surgical planning. Despite a good ability to extract large organs, the capacity of U-Net-inspired architectures to automatically delineate smaller structures remains a major issue, especially given the increase in receptive field size as we go deeper into the network. To deal with various abdominal structure sizes while exploiting efficient geometric constraints, we present a novel approach that integrates shape priors from a semi-overcomplete convolutional auto-encoder (S-OCAE) embedding into deep segmentation. Compared to standard convolutional auto-encoders (CAE), it exploits an overcomplete branch that projects data onto higher dimensions to better characterize anatomical structures with a small spatial extent. Experiments on abdominal organ and vessel delineation performed on various publicly available datasets highlight the effectiveness of our method compared to the state of the art, including U-Net trained without and with shape priors from a traditional CAE. Exploiting a semi-overcomplete convolutional auto-encoder embedding as shape priors improves the ability of deep segmentation models to provide realistic and accurate abdominal structure contours.
• A new semi-overcomplete convolutional auto-encoder is proposed to obtain shape priors.
• The resulting overcomplete shape priors are integrated into a deep segmentation pipeline.
• Experiments focus on abdominal organ and vessel segmentation from public datasets.
• Our method outperforms U-Net without/with shape priors from a standard auto-encoder.
• A frequency analysis of shape codes is provided in addition to segmentation scores.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
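One way to read the shape-prior mechanism above: masks are projected into a latent space, and the distance between the codes of a prediction and its reference acts as a regularizer. The two-feature "embedding" below is a toy stand-in; the actual work uses a trained S-OCAE, not this hand-made descriptor.

```python
def shape_code(mask):
    # Toy stand-in for the auto-encoder embedding: summarizes a binary
    # mask by its area and its bounding-box extent.
    pts = [(r, c) for r, row in enumerate(mask)
           for c, v in enumerate(row) if v]
    if not pts:
        return (0.0, 0.0)
    rows = [p[0] for p in pts]
    cols = [p[1] for p in pts]
    area = float(len(pts))
    extent = float((max(rows) - min(rows) + 1) * (max(cols) - min(cols) + 1))
    return (area, extent)

def shape_prior_penalty(pred, target):
    # Latent-space distance used as an extra regularization term,
    # mimicking how shape codes constrain the segmentation output.
    p, t = shape_code(pred), shape_code(target)
    return sum((a - b) ** 2 for a, b in zip(p, t))

square = [[1, 1], [1, 1]]  # reference shape
line = [[1, 1], [0, 0]]    # degenerate prediction, penalized
```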
11. Deep active learning for dual-view mammogram analysis
- Author
-
Pierre-Henri Conze, Gwenole Quellec, Mathieu Lamard, Heng Zhang, Béatrice Cochener, Yutong Yan, and Gouenou Coatrieux
- Subjects
Active learning (machine learning), Computer science, Deep learning, Pattern recognition, Oracle, Annotation, Computer-aided diagnosis, Medical imaging, Segmentation, Artificial intelligence
- Abstract
Supervised deep learning on medical imaging requires massive manual annotations, which demand expertise and are time-consuming to produce. Active learning aims at reducing annotation effort by adaptively selecting the most informative samples for labeling. In this paper, we propose a novel deep active learning approach for dual-view mammogram analysis, in particular breast mass segmentation and detection, where the necessity of labeling is estimated by exploiting the consistency of predictions arising from craniocaudal (CC) and mediolateral-oblique (MLO) views. Intuitively, if mass segmentation or detection is robustly performed, prediction results achieved on CC and MLO views should be consistent. Exploiting the inter-view consistency is hence a good way to guide the sampling mechanism, which iteratively selects the next image pairs to be labeled by an oracle. Experiments on the public DDSM-CBIS and INbreast datasets demonstrate that performance comparable to fully-supervised models can be reached using only 6.83% (9.56%) of labeled data for segmentation (detection). This suggests that combining dual-view mammogram analysis and active learning can strongly contribute to the development of computer-aided diagnosis systems.
- Published
- 2021
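The sampling mechanism described above can be sketched in a few lines: exam pairs whose CC and MLO predictions disagree most are sent to the oracle first. The disagreement measure, exam names, and probability values below are illustrative, not the paper's exact criterion.

```python
def disagreement(cc_pred, mlo_pred):
    # Inter-view inconsistency: mean absolute difference between the
    # (assumed registered) CC and MLO probability outputs.
    return sum(abs(a - b) for a, b in zip(cc_pred, mlo_pred)) / len(cc_pred)

def select_for_labeling(pairs, k):
    # Pick the k exam pairs whose views disagree most: those are the
    # samples the oracle is asked to annotate next.
    ranked = sorted(pairs, key=lambda p: disagreement(p[1], p[2]),
                    reverse=True)
    return [name for name, _, _ in ranked[:k]]

pairs = [
    ("exam_a", [0.9, 0.8], [0.9, 0.8]),  # consistent -> low priority
    ("exam_b", [0.9, 0.1], [0.1, 0.9]),  # inconsistent -> labeled first
    ("exam_c", [0.5, 0.5], [0.4, 0.6]),
]
picked = select_for_labeling(pairs, 2)  # -> ["exam_b", "exam_c"]
```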
12. Longitudinal detection of diabetic retinopathy early severity grade changes using deep learning
- Author
-
Gouenou Coatrieux, Béatrice Cochener, Mathieu Lamard, Pierre-Henri Conze, Pascale Massin, Gwenole Quellec, and Yutong Yan
- Subjects
Fusion scheme, Computer science, Deep learning, Feature vector, CAD, Diabetic retinopathy, Machine learning, Information fusion, Computer-aided diagnosis, Change detection
- Abstract
Longitudinal medical image analysis is crucial for identifying the unobvious emergence and evolution of early lesions, towards earlier and better patient-specific pathology management. However, traditional computer-aided diagnosis (CAD) systems for diabetic retinopathy (DR) rarely make use of longitudinal information to improve DR analysis. In this work, we present a deep information fusion framework that exploits two consecutive longitudinal studies to assess early DR severity changes. In particular, three fusion schemes are investigated: (1) early fusion of inputs, (2) intermediate fusion of feature vectors incorporating Spatial Transformer Networks (STN), and (3) late fusion of feature vectors. Exhaustive experiments against no-fusion baselines validate that incorporating prior DR studies can improve referable DR severity classification, with the late fusion scheme reaching an AUC of 0.9296. Advantages and limitations of the different fusion methods are discussed in depth. We also propose different pre-training strategies that bring considerable performance gains for DR severity grade change detection.
- Published
- 2021
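The early- versus late-fusion distinction in the abstract above can be sketched in a few lines; the weights and inputs are illustrative, whereas in the paper the fusion is learned end-to-end.

```python
def early_fusion(img_prev, img_curr):
    # Early fusion: the two studies are stacked channel-wise at the
    # input, so one network sees both time points from the start.
    return [list(pair) for pair in zip(img_prev, img_curr)]

def late_fusion(feat_prev, feat_curr, w=(0.5, 0.5)):
    # Late fusion: each study is encoded separately and the feature
    # vectors are merged just before classification.
    wp, wc = w
    return [wp * p + wc * c for p, c in zip(feat_prev, feat_curr)]

fused_early = early_fusion([0.2, 0.8], [0.4, 0.6])
fused_late = late_fusion([0.2, 0.8], [0.4, 0.6])
```

Intermediate fusion, the third scheme, would merge feature maps partway through the two encoders instead of at the input or at the end.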
Discovery Service for Jio Institute Digital Library