10 results for "Vilaplana, Verónica"
Search Results
2. Leishmaniasis Parasite Segmentation and Classification Using Deep Learning
- Author
-
Górriz, Marc, Aparicio, Albert, Raventós, Berta, Vilaplana, Verónica, Sayrol, Elisa, and López-Codina, Daniel
- Published
- 2018
3. Monte-Carlo Sampling Applied to Multiple Instance Learning for Histological Image Classification
- Author
-
Combalia, Marc, and Vilaplana, Verónica
- Published
- 2018
4. Looking behind occlusions: A study on amodal segmentation for robust on-tree apple fruit size estimation
- Author
-
Gené-Mola, Jordi, Ferrer-Ferrer, Mar, Gregorio, Eduard, Blok, Pieter M., Hemming, Jochen, Morros, Josep Ramon, Rosell-Polo, Joan R., Vilaplana, Verónica, and Ruiz-Hidalgo, Javier
- Subjects
Crop physiology, precision agriculture, precision farming, forestry, deep learning, image processing, horticulture, computer science applications, yield estimation, fruit visibility, fruit detection, fruit measurement, agronomy and crop science
- Abstract
The detection and sizing of fruits with computer vision methods is of interest because it provides relevant information to improve the management of orchard farming. However, the presence of partially occluded fruits limits the performance of existing methods, making reliable fruit sizing a challenging task. While previous fruit segmentation works limit segmentation to the visible region of fruits (known as modal segmentation), in this work we propose an amodal segmentation algorithm to predict the complete shape of each fruit, including both its visible and occluded regions. To do so, an end-to-end convolutional neural network (CNN) for simultaneous modal and amodal instance segmentation was implemented. The predicted amodal masks were used to estimate the fruit diameters in pixels. Modal masks were used to identify the visible region and measure the distance between the apples and the camera using the depth image. Finally, the fruit diameters in millimetres (mm) were computed by applying the pinhole camera model. The method was developed with a Fuji apple dataset consisting of 3925 RGB-D images acquired at different growth stages with a total of 15,335 annotated apples, and was subsequently tested in a case study to measure the diameter of Elstar apples at different growth stages. Fruit detection results showed an F1-score of 0.86, and the fruit diameter results reported a mean absolute error (MAE) of 4.5 mm and R2 = 0.80 irrespective of fruit visibility. Besides the diameter estimation, modal and amodal masks were used to automatically determine the percentage of visibility of measured apples. This feature was used as a confidence value, improving the diameter estimation to MAE = 2.93 mm and R2 = 0.91 when limiting the size estimation to fruits detected with a visibility higher than 60%. The main advantages of the present methodology are its robustness for measuring partially occluded fruits and the capability to determine the visibility percentage.
The main limitation is that depth images were generated by means of photogrammetry methods, which limits the efficiency of data acquisition. To overcome this limitation, future works should consider the use of commercial RGB-D sensors. The code and the dataset used to evaluate the method have been made publicly available at https://github.com/GRAP-UdL-AT/Amodal_Fruit_Sizing. This work was partly funded by the Departament de Recerca i Universitats de la Generalitat de Catalunya (grant 2021 LLAV 00088), the Spanish Ministry of Science, Innovation and Universities (grants RTI2018-094222-B-I00 [PAgFRUIT project], PID2021-126648OB-I00 [PAgPROTECT project] and PID2020-117142GB-I00 [DeeLight project] by MCIN/AEI/10.13039/501100011033 and by “ERDF, a way of making Europe”, by the European Union). The work of Jordi Gené-Mola was supported by the Spanish Ministry of Universities through a Margarita Salas postdoctoral grant funded by the European Union - NextGenerationEU. We would also like to thank Nufri (especially Santiago Salamero and Oriol Morreres) for their support during data acquisition, and Pieter van Dalfsen and Dirk de Hoog from Wageningen University & Research for additional data collection used in the case study.
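As a concrete illustration of the pinhole-model step described in this abstract, the conversion from a diameter in pixels to millimetres can be sketched as follows (illustrative only; the function name and example values are assumed, not taken from the paper):

```python
def fruit_diameter_mm(diameter_px, depth_mm, focal_length_px):
    """Pinhole camera model: real-world size equals pixel size
    scaled by depth and divided by the focal length in pixels."""
    return diameter_px * depth_mm / focal_length_px

# Assumed example: a 50 px diameter measured at 1.5 m (1500 mm) depth
# with a focal length of 1500 px gives a 50 mm apple.
diameter = fruit_diameter_mm(50, 1500.0, 1500.0)  # → 50.0
```

The depth value would come from the modal mask over the depth image, as the abstract describes.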
- Published
- 2023
5. Multi-modal deep learning for Fuji apple detection using RGB-D cameras and their radiometric capabilities.
- Author
-
Gené-Mola, Jordi, Vilaplana, Verónica, Rosell-Polo, Joan R., Morros, Josep-Ramon, Ruiz-Hidalgo, Javier, and Gregorio, Eduard
- Subjects
Deep learning, apples, colors, crop management, cameras, fruit
- Abstract
• The range-corrected intensity from RGB-D sensors is used for fruit detection.
• First multi-modal (color, depth, intensity) fruit detection dataset is presented.
• Faster R-CNN object detection network is adapted for use with 5-channel images.
• An improvement of 4.46% in F1-score is achieved when using all modalities.
• Results show an F1-score of 0.8983 and a mean average precision of 94.8%.
Fruit detection and localization will be essential for future agronomic management of fruit crops, with applications in yield prediction, yield mapping and automated harvesting. RGB-D cameras are promising sensors for fruit detection given that they provide geometrical information along with color data. Some of these sensors work on the principle of time-of-flight (ToF) and, besides color and depth, provide the backscatter signal intensity. However, this radiometric capability has not been exploited for fruit detection applications. This work presents the KFuji RGB-DS database, composed of 967 multi-modal images containing a total of 12,839 Fuji apples. Compilation of the database allowed a study of the usefulness of fusing RGB-D and radiometric information obtained with Kinect v2 for fruit detection. To do so, the signal intensity was range-corrected to overcome signal attenuation, obtaining an image that was proportional to the reflectance of the scene. A registration between RGB, depth and intensity images was then carried out. The Faster R-CNN model was adapted for use with five-channel input images: color (RGB), depth (D) and range-corrected intensity signal (S). Results show an improvement of 4.46% in F1-score when adding depth and range-corrected intensity channels, obtaining an F1-score of 0.898 and an AP of 94.8% when all channels are used. From our experimental results, it can be concluded that the radiometric capabilities of ToF sensors provide valuable information for fruit detection.
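The range correction mentioned in this abstract compensates for signal attenuation with distance. Assuming a simple inverse-square attenuation model (an assumption on our part; the paper's exact correction may differ), a minimal sketch:

```python
def range_corrected_intensity(intensity, depth_m, ref_depth_m=1.0):
    """Scale the raw ToF backscatter intensity by the squared distance
    ratio so the result is proportional to scene reflectance."""
    return intensity * (depth_m / ref_depth_m) ** 2

# A surface at 2 m returning intensity 25 corrects to 100, matching an
# identical surface at the 1 m reference distance.
corrected = range_corrected_intensity(25, 2.0)  # → 100.0
```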
- Published
- 2019
6. Single-Image Super-Resolution of Sentinel-2 Low Resolution Bands with Residual Dense Convolutional Neural Networks.
- Author
-
Salgueiro, Luis, Marcello, Javier, and Vilaplana, Verónica
- Subjects
Convolutional neural networks, multispectral imaging, spatial resolution
- Abstract
Sentinel-2 satellites have become one of the main resources for Earth observation because their images are free of charge, offer wide spatial coverage and have a high temporal revisit rate. Sentinel-2 senses the same location at different spatial resolutions, generating a multi-spectral image with 13 bands at 10, 20, and 60 m/pixel. In this work, we propose a single-image super-resolution model based on convolutional neural networks that enhances both groups of low-resolution bands (20 m and 60 m) to the maximal resolution sensed (10 m) at the same time, whereas other approaches train two independent models, one for each group of LR bands. Our proposed model, named Sen2-RDSR, is made up of Residual in Residual blocks that produce two final outputs at maximal resolution, one for the 20 m/pixel bands and the other for the 60 m/pixel bands. The training is done in two stages, first focusing on the 20 m bands and then on the 60 m bands. Experimental results using six quality metrics (RMSE, SRE, SAM, PSNR, SSIM, ERGAS) show that our model outperforms other state-of-the-art approaches, and it is very effective and suitable as a preliminary step for land and coastal applications, such as studies involving pixel-based classification for Land-Use-Land-Cover or the generation of vegetation indices.
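Of the six quality metrics listed, PSNR is the most widely used; a minimal sketch of its computation from two flat pixel sequences (function names are ours, not the authors' evaluation code):

```python
import math

def mse(a, b):
    """Mean squared error between two equally sized pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means the super-resolved
    image is closer to the reference."""
    return 10.0 * math.log10(max_val ** 2 / mse(a, b))
```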
- Published
- 2021
7. A Dual Network for Super-Resolution and Semantic Segmentation of Sentinel-2 Imagery.
- Author
-
Abadal, Saüc, Salgueiro, Luis, Marcello, Javier, and Vilaplana, Verónica
- Subjects
Deep learning, high resolution imaging, remote sensing, land cover, spatial resolution, electronic data processing
- Abstract
There is a growing interest in the development of automated data processing workflows that provide reliable, high spatial resolution land cover maps. However, high-resolution remote sensing images are not always affordable. Taking into account the free availability of Sentinel-2 satellite data, in this work we propose a deep learning model to generate high-resolution segmentation maps from low-resolution inputs in a multi-task approach. Our proposal is a dual-network model with two branches: the Single Image Super-Resolution branch, which reconstructs a high-resolution version of the input image, and the Semantic Segmentation Super-Resolution branch, which predicts a high-resolution segmentation map with a scaling factor of 2. We performed several experiments to find the best architecture, training and testing on a subset of the S2GLC 2017 dataset. We based our model on the DeepLabV3+ architecture, enhancing it to achieve an improvement of 5% in IoU and almost 10% in recall. Furthermore, our qualitative results demonstrate the effectiveness and usefulness of the proposed approach.
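The IoU figure reported above measures the overlap between predicted and reference segmentation masks; a minimal sketch over flat binary masks (illustrative, not the authors' evaluation code):

```python
def iou(pred, target):
    """Intersection over Union of two binary masks given as flat 0/1 lists."""
    inter = sum(1 for p, t in zip(pred, target) if p and t)
    union = sum(1 for p, t in zip(pred, target) if p or t)
    return inter / union if union else 1.0

# Two 2x2 masks agreeing on one foreground pixel out of three in the union.
score = iou([1, 1, 0, 0], [1, 0, 1, 0])  # → 0.333...
```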
- Published
- 2021
8. Super-Resolution of Sentinel-2 Imagery Using Generative Adversarial Networks.
- Author
-
Salgueiro Romero, Luis, Marcello, Javier, and Vilaplana, Verónica
- Subjects
Optical remote sensing, multispectral imaging, data distribution, remote sensing
- Abstract
Sentinel-2 satellites provide multi-spectral optical remote sensing images with four bands at 10 m of spatial resolution. These images, thanks to the open data distribution policy, are becoming an important resource for several applications. However, for small-scale studies, the spatial detail of these images might not be sufficient. On the other hand, WorldView commercial satellites offer multi-spectral images with a very high spatial resolution, typically less than 2 m, but their use can be impractical for large areas or multi-temporal analysis due to their high cost. To exploit the free availability of Sentinel imagery, it is worth considering deep learning techniques for single-image super-resolution, which allow the spatial enhancement of low-resolution (LR) images by recovering high-frequency details to produce high-resolution (HR) super-resolved images. In this work, we implement and train a model based on the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) with pairs of WorldView-Sentinel images to generate a super-resolved multispectral Sentinel-2 output with a scaling factor of 5. Our model, named RS-ESRGAN, removes the upsampling layers of the network to make it feasible to train with co-registered remote sensing images. The results outperform state-of-the-art models on standard metrics such as PSNR, SSIM, ERGAS, SAM and CC. Moreover, qualitative visual analysis shows spatial improvements as well as preservation of the spectral information, allowing the super-resolved Sentinel-2 imagery to be used in studies requiring very high spatial resolution.
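Among the metrics cited (PSNR, SSIM, ERGAS, SAM, CC), SAM directly checks the spectral-preservation claim; a minimal sketch (the function name is ours):

```python
import math

def spectral_angle(u, v):
    """Spectral Angle Mapper: the angle in radians between two pixel
    spectra; 0 means identical spectral shape regardless of brightness."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / (norm_u * norm_v))))

# A spectrum and a brighter copy of itself have zero spectral angle.
angle = spectral_angle([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # → 0.0
```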
- Published
- 2020
9. Simultaneous fruit detection and size estimation using multitask deep neural networks.
- Author
-
Ferrer-Ferrer, Mar, Ruiz-Hidalgo, Javier, Gregorio, Eduard, Vilaplana, Verónica, Morros, Josep-Ramon, and Gené-Mola, Jordi
- Subjects
Artificial neural networks, fruit, architectural design, apples
- Abstract
The measurement of fruit size is of great interest to estimate the yield and predict the harvest resources in advance. This work proposes a novel technique for in-field apple detection and measurement based on deep neural networks. The proposed framework was trained with RGB-D data and consists of an end-to-end multitask deep neural network architecture specifically designed to perform the following tasks: 1) detection and segmentation of each fruit from its surroundings; 2) estimation of the diameter of each detected fruit. The methodology was tested with a total of 15,335 annotated apples at different growth stages, with diameters varying from 27 mm to 95 mm. Fruit detection results reported an F1-score of 0.88 and a mean absolute error of diameter estimation of 5.64 mm. These are state-of-the-art results with the additional advantages of: a) using an end-to-end multitask trainable network; b) an efficient and fast inference speed; and c) being based on RGB-D data, which can be acquired with affordable depth cameras. The main disadvantage, however, is the need to annotate a large amount of data with fruit masks and diameter ground truth to train the model. Finally, a fruit visibility analysis showed an improvement in the prediction when limiting the measurement to apples above 65% visibility (mean absolute error of 5.09 mm). This suggests that future works should develop a method for automatically identifying the most visible apples and discarding the prediction of highly occluded fruits.
• An end-to-end trainable multi-task deep neural network was designed.
• The network includes two branches: instance segmentation and size regression.
• The architecture was adapted for use with 4-channel RGB + D images.
• Fruit detection results reported an F1-score of 0.88.
• Fruit size estimation results reported a mean absolute error of 5.64 mm.
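The F1-score reported for detection is the harmonic mean of precision and recall; from raw detection counts it can be sketched as follows (the counts below are illustrative, not the paper's):

```python
def f1_score(tp, fp, fn):
    """F1 from true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Assumed counts: 80 correct detections, 10 spurious, 12 missed apples.
score = f1_score(80, 10, 12)  # ≈ 0.879
```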
- Published
- 2023
10. SurvLIMEpy: A Python package implementing SurvLIME.
- Author
-
Pachón-García, Cristian, Hernández-Pérez, Carlos, Delicado, Pedro, and Vilaplana, Verónica
- Subjects
Python programming language, proportional hazards models, machine learning, deep learning, survival analysis (biometry)
- Abstract
In this paper we present SurvLIMEpy, an open-source Python package that implements the SurvLIME algorithm. This method computes local feature importance for machine learning algorithms designed for modelling survival analysis data. The presented implementation uses a matrix-wise formulation, which speeds up execution. Additionally, SurvLIMEpy assists the user with visualisation tools to better understand the result of the algorithm. The package supports a wide variety of survival models, from the Cox proportional hazards model to deep learning models such as DeepHit or DeepSurv. Two types of experiments are presented in this paper. First, by means of simulated data, we study the ability of the algorithm to capture the importance of the features. Second, we use three open-source survival datasets together with a set of survival algorithms to demonstrate how SurvLIMEpy behaves when applied to different models.
• Python package implementing the SurvLIME algorithm for computing feature importance.
• Supports a wide variety of survival models, including deep learning models.
• Fast and efficient implementation due to matrix-wise optimisation problems.
• Open-source implementation available on GitHub.
• Stable release provided on PyPI.
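SurvLIME's core idea is to fit a local Cox surrogate to a black-box survival model around the instance being explained. A conceptual sketch of that fit (this is not the SurvLIMEpy API; the function name is ours and the formulation is simplified to a time-averaged weighted least squares):

```python
import numpy as np

def local_cox_importance(X, log_chf, log_baseline, weights):
    """Fit Cox coefficients beta so that, for each perturbed sample x_i,
    log H(t | x_i) ≈ log H0(t) + beta . x_i, via weighted least squares.
    X: (n, p) perturbed samples; log_chf: (n, T) black-box log cumulative
    hazards; log_baseline: (T,) log baseline hazard; weights: (n,)."""
    # For a Cox model the offset beta . x_i is constant in t, so the
    # residual can be averaged over the time grid before solving.
    target = (log_chf - log_baseline).mean(axis=1)
    w = np.sqrt(weights)
    beta, *_ = np.linalg.lstsq(X * w[:, None], target * w, rcond=None)
    return beta
```

With data generated exactly from a Cox model this recovers the true coefficients; on a real black-box model it yields the local feature importances that SurvLIME reports.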
- Published
- 2024
Discovery Service for Jio Institute Digital Library