26 results for "Wieslander, Håkan"
Search Results
2. Evaluating the utility of brightfield image data for mechanism of action prediction
- Author
-
Harrison, Philip John, Gupta, Ankit, Rietdijk, Jonne, Wieslander, Håkan, Carreras-Puigvert, Jordi, Georgiev, Polina, Wählby, Carolina, Spjuth, Ola, and Sintorn, Ida-Maria
- Abstract
Fluorescence staining techniques, such as Cell Painting, together with fluorescence microscopy have proven invaluable for visualizing and quantifying the effects that drugs and other perturbations have on cultured cells. However, fluorescence microscopy is expensive, time-consuming, labor-intensive, and the stains applied can be cytotoxic, interfering with the activity under study. The simplest form of microscopy, brightfield microscopy, lacks these downsides, but the images produced have low contrast and the cellular compartments are difficult to discern. Nevertheless, by harnessing deep learning, these brightfield images may still be sufficient for various predictive purposes. In this study, we compared the predictive performance of models trained on fluorescence images to those trained on brightfield images for predicting the mechanism of action (MoA) of different drugs. We also extracted CellProfiler features from the fluorescence images and used them to benchmark the performance. Overall, we found comparable and largely correlated predictive performance for the two imaging modalities. This is promising for future studies of MoAs in time-lapse experiments for which using fluorescence images is problematic. Explorations based on explainable AI techniques also provided valuable insights regarding compounds that were better predicted by one modality over the other. The first two authors share first authorship; the last two authors share last authorship.
- Published
- 2023
- Full Text
- View/download PDF
3. Is brightfield all you need for mechanism of action prediction?
- Author
-
Gupta, Ankit, Harrison, Philip J, Wieslander, Håkan, Rietdijk, Jonne, Puigvert, Jordi Carreras, Georgiev, Polina, Wählby, Carolina, Spjuth, Ola, and Sintorn, Ida-Maria
- Published
- 2022
- Full Text
- View/download PDF
4. Application, Optimisation and Evaluation of Deep Learning for Biomedical Imaging
- Author
-
Wieslander, Håkan
- Abstract
Microscopy imaging is a powerful technique when studying biology at a cellular and sub-cellular level. When combined with digital image analysis it creates an invaluable tool for investigating complex biological processes and phenomena. However, imaging at the cell and sub-cellular level tends to generate large amounts of data which can be difficult to analyse, navigate and store. Despite these difficulties, large data volumes mean more information content which is beneficial for computational methods like machine learning, especially deep learning. The union of microscopy imaging and deep learning thus provides numerous opportunities for advancing our scientific understanding and uncovering interesting and useful biological insights. The work in this thesis explores various means for optimising information extraction from microscopy data utilising image analysis with deep learning. The focus is on three different imaging modalities: bright-field; fluorescence; and transmission electron microscopy. Within these modalities different learning-based image analysis and processing techniques are explored, ranging from image classification and detection to image restoration and translation. The main contributions are: (i) a computational method for diagnosing oral and cervical cancer based on smear samples and bright-field microscopy; (ii) a hierarchical analysis of whole-slide tissue images from fluorescence microscopy and introducing a confidence based measure for pixel classifications; (iii) an image restoration model for motion-degraded images from transmission electron microscopy with an evaluation of model overfitting on underlying textures; and (iv) an image-to-image translation (virtual staining) of cell images from bright-field to fluorescence microscopy, optimised for biological feature relevance. A common theme underlying all the investigations in this thesis is that the evaluation of the methods used is in relation to the biological question at hand.
- Published
- 2022
5. Is brightfield all you need for MoA prediction?
- Author
-
Gupta, Ankit, Harrison, Philip J, Wieslander, Håkan, Rietdijk, Jonne, Carreras-Puigvert, Jordi, Georgiev, Polina, Wählby, Carolina, Spjuth, Ola, and Sintorn, Ida-Maria
- Abstract
Fluorescence staining techniques, such as Cell Painting, together with fluorescence microscopy have proven invaluable for visualizing and quantifying the effects that drugs and other perturbations have on cultured cells. However, fluorescence microscopy is expensive, time-consuming, and labor-intensive, and the stains applied can be cytotoxic, interfering with the activity under study. The simplest form of microscopy, brightfield microscopy, lacks these downsides, but the images produced have low contrast and the cellular compartments are difficult to discern. Nevertheless, by harnessing deep learning, these brightfield images may still be sufficient for various predictive purposes. In this study, we compared the predictive performance of models trained on fluorescence images to those trained on brightfield images for predicting the mechanism of action (MoA) of different drugs. We also extracted CellProfiler features from the fluorescence images and used them to benchmark the performance. Overall, we found comparable and correlated predictive performance for the two imaging modalities. This is promising for future studies of MoAs in time-lapse experiments.
- Published
- 2022
6. Learning to see colours: Biologically relevant virtual staining for adipocyte cell images
- Author
-
Wieslander, Håkan, Gupta, Ankit, Bergman, Ebba, Hallström, Erik, and Harrison, Philip John
- Published
- 2021
- Full Text
- View/download PDF
7. Rapid development of cloud-native intelligent data pipelines for scientific data streams using the HASTE Toolkit
- Author
-
Blamey, Ben, Toor, Salman, Dahlö, Martin, Wieslander, Håkan, Harrison, Philip J, Sintorn, Ida-Maria, Sabirsh, Alan, Wählby, Carolina, Spjuth, Ola, and Hellander, Andreas
- Subjects
Diagnostic Imaging, HASTE, tiered storage, Computer Sciences, interestingness functions, Biological Science Disciplines, image analysis, Technical Note, Software, stream processing
- Abstract
BACKGROUND: Large streamed datasets, characteristic of life science applications, are often resource-intensive to process, transport and store. We propose a pipeline model, a design pattern for scientific pipelines, where an incoming stream of scientific data is organized into a tiered or ordered "data hierarchy". We introduce the HASTE Toolkit, a proof-of-concept cloud-native software toolkit based on this pipeline model, to partition and prioritize data streams to optimize use of limited computing resources. FINDINGS: In our pipeline model, an "interestingness function" assigns an interestingness score to data objects in the stream, inducing a data hierarchy. From this score, a "policy" guides decisions on how to prioritize computational resource use for a given object. The HASTE Toolkit is a collection of tools to adopt this approach. We evaluate with 2 microscopy imaging case studies. The first is a high content screening experiment, where images are analyzed in an on-premise container cloud to prioritize storage and subsequent computation. The second considers edge processing of images for upload into the public cloud for real-time control of a transmission electron microscope. CONCLUSIONS: Through our evaluation, we created smart data pipelines capable of effective use of storage, compute, and network resources, enabling more efficient data-intensive experiments. We note a beneficial separation between scientific concerns of data priority, and the implementation of this behaviour for different resources in different deployment contexts. The toolkit allows intelligent prioritization to be "bolted on" to new and existing systems, and is intended for use with a range of technologies in different deployment scenarios. Spjuth and Hellander shared senior authorship.
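The pipeline model in the abstract above can be illustrated with a minimal sketch: an interestingness function scores each streamed object, and a policy maps scores to resource tiers. All names here (`interestingness`, `tier_for`, the score formula) are hypothetical illustrations, not the HASTE Toolkit API.

```python
def interestingness(obj: dict) -> float:
    """Score a streamed data object; higher means more interesting."""
    return obj.get("focus_quality", 0.0) * obj.get("cell_count", 0)

def tier_for(score: float) -> str:
    """A simple policy mapping interestingness scores to resource tiers."""
    if score >= 100:
        return "hot"    # full-resolution storage and immediate analysis
    if score >= 10:
        return "warm"   # compressed storage, deferred analysis
    return "cold"       # archive cheaply or discard

# Two hypothetical image objects from a microscopy stream
stream = [
    {"id": 1, "focus_quality": 0.9, "cell_count": 150},
    {"id": 2, "focus_quality": 0.2, "cell_count": 10},
]
# The scores induce the tiered "data hierarchy" described in the abstract
hierarchy = {obj["id"]: tier_for(interestingness(obj)) for obj in stream}
```

The separation the authors note, between the scientific concern (the scoring) and the resource decision (the policy), corresponds to the two functions being independent and swappable.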
- Published
- 2021
8. Deep learning and conformal prediction for hierarchical analysis of large-scale whole-slide tissue images
- Author
-
Wieslander, Håkan, Harrison, Philip J., Skogberg, Gabriel, Jackson, Sonya, Fridén, Markus, Karlsson, Johan, Spjuth, Ola, and Wählby, Carolina
- Abstract
With the increasing amount of image data collected from biomedical experiments there is an urgent need for smarter and more effective analysis methods. Many scientific questions require analysis of image subregions related to some specific biology. Finding such regions of interest (ROIs) at low resolution and limiting the data subjected to final quantification at high resolution can reduce computational requirements and save time. In this paper we propose a three-step pipeline: First, bounding boxes for ROIs are located at low resolution. Next, ROIs are subjected to semantic segmentation into sub-regions at mid-resolution. We also estimate the confidence of the segmented sub-regions. Finally, quantitative measurements are extracted at high resolution. We use deep learning for the first two steps in the pipeline and conformal prediction for confidence assessment. We show that limiting final quantitative analysis to sub regions with high confidence reduces noise and increases separability of observed biological effects.
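The three-step pipeline in the abstract above can be sketched as a skeleton: locate ROIs at low resolution, segment with a confidence estimate at mid-resolution, and quantify at high resolution only where confidence is sufficient. Function names and placeholder outputs here are hypothetical; the actual steps use deep networks and conformal prediction.

```python
def locate_rois_low_res(image):
    """Step 1: bounding boxes for regions of interest at low resolution."""
    return [(0, 0, 64, 64), (64, 64, 128, 128)]  # placeholder detections

def segment_mid_res(roi):
    """Step 2: semantic segmentation with a per-region confidence score."""
    return {"mask": None, "confidence": 0.95 if roi[0] == 0 else 0.60}

def quantify_high_res(roi):
    """Step 3: quantitative measurements at full resolution."""
    return {"area": (roi[2] - roi[0]) * (roi[3] - roi[1])}

def analyse(image, min_confidence=0.9):
    """Only high-confidence sub-regions reach the expensive final step."""
    results = []
    for roi in locate_rois_low_res(image):
        seg = segment_mid_res(roi)
        if seg["confidence"] >= min_confidence:  # conformal-style filtering
            results.append(quantify_high_res(roi))
    return results
```

Filtering on confidence before the final quantification is what, per the abstract, reduces noise and increases the separability of observed biological effects.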
- Published
- 2021
- Full Text
- View/download PDF
9. Learning to see colours: Biologically relevant virtual staining for adipocyte cell images
- Author
-
Wieslander, Håkan, Gupta, Ankit, Bergman, Ebba, Hallström, Erik, and Harrison, Philip John
- Abstract
Fluorescence microscopy, which visualizes cellular components with fluorescent stains, is an invaluable method in image cytometry. From these images various cellular features can be extracted. Together these features form phenotypes that can be used to determine effective drug therapies, such as those based on nanomedicines. Unfortunately, fluorescence microscopy is time-consuming, expensive, labour-intensive, and toxic to the cells. Bright-field images lack these downsides but also lack the clear contrast of the cellular components and hence are difficult to use for downstream analysis. Generating the fluorescence images directly from bright-field images using virtual staining (also known as "label-free prediction" and "in-silico labeling") can get the best of both worlds, but can be very challenging to do for poorly visible cellular structures in the bright-field images. To tackle this problem, deep learning models were explored to learn the mapping between bright-field and fluorescence images for adipocyte cell images. The models were tailored for each imaging channel, paying particular attention to the various challenges in each case, and those with the highest fidelity in extracted cell-level features were selected. The solutions included utilizing privileged information for the nuclear channel, and using image gradient information and adversarial training for the lipids channel. The former resulted in better morphological and count features and the latter resulted in more faithfully captured defects in the lipids, which are key features required for downstream analysis of these channels.
- Published
- 2021
- Full Text
- View/download PDF
10. Deep-learning models for lipid nanoparticle-based drug delivery
- Author
-
Harrison, Philip J., Wieslander, Håkan, Sabirsh, Alan, Karlsson, Johan, Malmsjö, Victor, Hellander, Andreas, Wählby, Carolina, and Spjuth, Ola
- Abstract
Background: Early prediction of time-lapse microscopy experiments enables intelligent data management and decision-making. Aim: Using time-lapse data of HepG2 cells exposed to lipid nanoparticles loaded with mRNA for expression of GFP, the authors hypothesized that it is possible to predict in advance whether a cell will express GFP. Methods: The first modeling approach used a convolutional neural network extracting per-cell features at early time points. These features were then combined and explored using either a long short-term memory network (approach 2) or time series feature extraction and gradient boosting machines (approach 3). Results: Accounting for the temporal dynamics significantly improved performance. Conclusion: The results highlight the benefit of accounting for temporal dynamics when studying drug delivery using high-content imaging.
- Published
- 2021
- Full Text
- View/download PDF
11. TEM image restoration from fast image streams
- Author
-
Wieslander, Håkan, Wählby, Carolina, and Sintorn, Ida-Maria
- Abstract
Microscopy imaging experiments generate vast amounts of data, and there is a high demand for smart acquisition and analysis methods. This is especially true for transmission electron microscopy (TEM), where terabytes of data are produced if imaging a full sample at high resolution, and analysis can take several hours. One way to tackle this issue is to collect a continuous stream of low-resolution images whilst moving the sample under the microscope, and thereafter use this data to find the parts of the sample deemed most valuable for high-resolution imaging. However, such image streams are degraded by both motion blur and noise. Building on deep learning based approaches developed for deblurring videos of natural scenes, we explore the opportunities and limitations of deblurring and denoising images captured from a fast image stream collected by a TEM microscope. We start from existing neural network architectures and make adjustments of convolution blocks and loss functions to better fit TEM data. We present deblurring results on two real datasets of images of kidney tissue and a calibration grid. Both datasets consist of low quality images from a fast image stream captured by moving the sample under the microscope, and the corresponding high quality images of the same region, captured after stopping the movement at each position to let all motion settle. We also explore the generalizability and overfitting on real and synthetically generated data. The quality of the restored images, evaluated both quantitatively and visually, shows that using deep learning for image restoration of TEM live image streams has great potential but also comes with some limitations.
- Published
- 2021
- Full Text
- View/download PDF
12. Deep-learning models for lipid nanoparticle-based drug delivery
- Author
-
Harrison, Philip J, Wieslander, Håkan, Sabirsh, Alan, Karlsson, Johan, Malmsjö, Victor, Hellander, Andreas, Wählby, Carolina, and Spjuth, Ola
- Published
- 2021
- Full Text
- View/download PDF
13. Deep learning models for lipid-nanoparticle-based drug delivery
- Author
-
Harrison, Philip John, Wieslander, Håkan, Sabirsh, Alan, Karlsson, Johan, Malmsjö, Victor, Hellander, Andreas, Wählby, Carolina, and Spjuth, Ola
- Subjects
Messenger RNA, Computer science, business.industry, Deep learning, Feature extraction, RNA, Pattern recognition, Cell morphology, Convolutional neural network, Green fluorescent protein, Test set, Gradient boosting, Artificial intelligence, business
- Abstract
Large-scale time-lapse microscopy experiments are useful to understand delivery and expression in RNA-based therapeutics. The resulting data has high dimensionality and high (but sparse) information content, making it challenging and costly to store and process. Early prediction of experimental outcome enables intelligent data management and decision making. We start from time-lapse data of HepG2 cells exposed to lipid-nanoparticles loaded with mRNA for expression of green fluorescent protein (GFP). We hypothesize that it is possible to predict if a cell will express GFP or not based on cell morphology at time-points prior to GFP expression. Here we present results on per-cell classification (GFP expression/no GFP expression) and regression (level of GFP expression) using three different approaches. In the first approach we use a convolutional neural network extracting per-cell features at each time point. We then utilize the same features combined with: a long short-term memory (LSTM) network encoding temporal dynamics (approach 2); and time-series feature extraction using the python package tsfresh followed by principal component analysis and gradient boosting machines (approach 3), to reach a final classification or regression result. Application of the three approaches to a previously unanalyzed test set of cells showed good predictive performance of all three approaches, but that accounting for the temporal dynamics via LSTMs or tsfresh led to significantly improved performance. The predictions made by the LSTM and tsfresh applications were not significantly different. The results highlight the benefit of accounting for temporal dynamics when studying drug delivery using high content imaging.
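The third approach above summarizes each cell's per-time-point features into time-series statistics before a downstream classifier. A simplified stand-in for that feature-extraction step is sketched below; the paper uses tsfresh followed by PCA and gradient boosting, whereas the summary features and example values here are hypothetical.

```python
import statistics

def summarize_track(track):
    """Collapse a per-cell feature time series into summary features,
    analogous (in miniature) to what tsfresh extracts automatically."""
    return {
        "mean": statistics.mean(track),
        "std": statistics.pstdev(track),
        "slope": (track[-1] - track[0]) / (len(track) - 1),  # crude trend
    }

# Two hypothetical cells: one morphology feature sampled at 4 early time points
cells = {
    "cell_a": [1.0, 1.2, 1.5, 1.9],  # steadily increasing
    "cell_b": [1.0, 1.0, 0.9, 0.9],  # roughly flat
}
features = {cell_id: summarize_track(track) for cell_id, track in cells.items()}
# `features` would then feed a classifier predicting GFP expression per cell
```

The per-cell summary rows would then be stacked into a feature matrix for the gradient boosting step, which is why encoding the temporal trend (e.g. the slope) matters for predictive performance.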
- Published
- 2020
- Full Text
- View/download PDF
14. Deep learning and conformal prediction for hierarchical analysis of large-scale whole-slide tissue images
- Author
-
Wieslander, Håkan, Harrison, Philip J., Skogberg, Gabriel, Jackson, Sonya, Fridén, Markus, Karlsson, Johan, Spjuth, Ola, and Wählby, Carolina
- Subjects
Medical Image Processing, deep learning, Conformal prediction, hierarchical analysis, digital pathology
- Abstract
With the increasing amount of image data collected from biomedical experiments there is an urgent need for smarter and more effective analysis methods. Many scientific questions require analysis of image subregions related to some specific biology. Finding such regions of interest (ROIs) at low resolution and limiting the data subjected to final quantification at high resolution can reduce computational requirements and save time. In this paper we propose a three-step pipeline: First, bounding boxes for ROIs are located at low resolution. Next, ROIs are subjected to semantic segmentation into sub-regions at mid-resolution. We also estimate the confidence of the segmented sub-regions. Finally, quantitative measurements are extracted at high resolution. We use deep learning for the first two steps in the pipeline and conformal prediction for confidence assessment. We show that limiting final quantitative analysis to sub regions with high confidence reduces noise and increases separability of observed biological effects.
- Published
- 2020
- Full Text
- View/download PDF
15. Rapid development of cloud-native intelligent data pipelines for scientific data streams using the HASTE Toolkit
- Author
-
Blamey, Ben, Toor, Salman, Dahlö, Martin, Wieslander, Håkan, Harrison, Philip J, Sintorn, Ida-Maria, Sabirsh, Alan, Wählby, Carolina, Spjuth, Ola, and Hellander, Andreas
- Published
- 2021
- Full Text
- View/download PDF
16. TEM image restoration from fast image streams
- Author
-
Wieslander, Håkan, Wählby, Carolina, and Sintorn, Ida-Maria
- Published
- 2021
- Full Text
- View/download PDF
17. Learning to see colours: generating biologically relevant fluorescent labels from bright-field images
- Author
-
Wieslander, Håkan, Gupta, Ankit, Bergman, Ebba, Hallström, Erik, and Harrison, Philip J
- Published
- 2021
- Full Text
- View/download PDF
18. Rapid development of cloud-native intelligent data pipelines for scientific data streams using the HASTE Toolkit
- Author
-
Blamey, Ben, Toor, Salman, Dahlö, Martin, Wieslander, Håkan, Harrison, Philip J, Sintorn, Ida-Maria, Sabirsh, Alan, Wählby, Carolina, Spjuth, Ola, and Hellander, Andreas
- Published
- 2020
- Full Text
- View/download PDF
19. Deep learning models for lipid-nanoparticle-based drug delivery
- Author
-
Harrison, Philip J, Wieslander, Håkan, Sabirsh, Alan, Karlsson, Johan, Malmsjö, Victor, Hellander, Andreas, Wählby, Carolina, and Spjuth, Ola
- Published
- 2020
- Full Text
- View/download PDF
20. Deep Learning in Image Cytometry : A Review
- Author
-
Gupta, Anindya, Harrison, Philip J., Wieslander, Håkan, Pielawski, Nicolas, Kartasalo, Kimmo, Partel, Gabriele, Solorzano, Leslie, Suveer, Amit, Klemm, Anna H., Spjuth, Ola, Sintorn, Ida-Maria, and Wählby, Carolina
- Published
- 2019
- Full Text
- View/download PDF
21. HarmonicIO : Scalable data stream processing for scientific datasets
- Author
-
Torruangwatthana, Preechakorn, Wieslander, Håkan, Blamey, Ben, Hellander, Andreas, and Toor, Salman
- Abstract
eSSENCE
- Published
- 2018
- Full Text
- View/download PDF
22. Detection of Malignancy-Associated Changes Due to Precancerous and Oral Cancer Lesions: A Pilot Study Using Deep Learning
- Author
-
Bengtsson, Ewert, Wieslander, Håkan, Forslid, Gustav, Wählby, Carolina, Hirsch, Jan-Michael, Runow Stark, Christina, Kecheril Sadanandan, Sajith, and Lindblad, Joakim
- Abstract
Background: The incidence of oral cancer is increasing and it is affecting younger individuals. PAP smear-based screening, both visual and automated, has been used for decades to successfully decrease the incidence of cervical cancer. Can similar methods be used for oral cancer screening? We have carried out a pilot study using neural networks for classifying cells, both from cervical cancer and oral cancer patients. The results, which were reported from a technical point of view at the 2017 IEEE International Conference on Computer Vision Workshop (ICCVW), were particularly interesting for the oral cancer cases, and we are currently collecting and analyzing samples from more patients. Methods: Samples were collected with a brush in the oral cavity and smeared on glass slides, stained, and prepared, according to standard PAP procedures. Images from the slides were digitized with a 0.35 micron pixel size, using focus stacks with 15 levels 0.4 micron apart. Between 245 and 2,123 cell nuclei were manually selected for analysis for each of 14 datasets, usually 2 datasets for each of the 6 cases, in total around 15,000 cells. A small region was cropped around each nucleus, and the best 2 adjacent focus layers in each direction were automatically found, thus creating images of 100x100x5 pixels. Nuclei were chosen with an aim to select well preserved free-lying cells, with no effort to specifically select diagnostic cells. We therefore had no ground truth on the cellular level, only on the patient level. Subsets of these images were used for training 2 sets of neural networks, created according to the ResNet and VGG architectures described in literature, to distinguish between cells from healthy persons and those with precancerous lesions. The datasets were augmented through mirroring and 90 degree rotations. The resulting networks were used to classify subsets of cells from different persons than those in the training sets. This was repeated for a total of 5 folds. Result
- Published
- 2018
23. Deep Convolutional Neural Networks For Detecting Cellular Changes Due To Malignancy
- Author
-
Wieslander, Håkan and Forslid, Gustav
- Subjects
Deep Learning, Engineering and Technology, Convolutional Neural Networks, Cancer Screening
- Abstract
Discovering cancer at an early stage is an effective way to increase the chance of survival. However, since most screening processes are done manually, they are time-inefficient and thus costly. One way of automating the screening process could be to classify cells using Convolutional Neural Networks. Convolutional Neural Networks have been proven to produce high accuracy for image classification tasks. This thesis investigates if Convolutional Neural Networks can be used as a tool to detect cellular changes due to malignancy in the oral cavity and uterine cervix. Two datasets containing oral cells and two datasets containing cervical cells were used. The cells were divided into normal and abnormal cells for a binary classification. The performance was evaluated for two different network architectures, ResNet and VGG. For the oral datasets the accuracy varied between 78-82% correctly classified cells depending on the dataset and network. For the cervical datasets the accuracy varied between 84-86% correctly classified cells depending on the dataset and network. These results indicate a high potential for classifying abnormalities in oral and cervical cells. ResNet was shown to be the preferable network, with a higher accuracy and a smaller standard deviation.
- Published
- 2017
24. Deep Learning in Image Cytometry: A Review
- Author
-
Gupta, Anindya, Harrison, Philip J., Wieslander, Håkan, Pielawski, Nicolas, Kartasalo, Kimmo, Partel, Gabriele, Solorzano, Leslie, Suveer, Amit, Klemm, Anna H., Spjuth, Ola, Sintorn, Ida-Maria, and Wählby, Carolina
- Published
- 2018
- Full Text
- View/download PDF
25. Utvecklande av träningsapp för iOS 7 (Development of a training app for iOS 7)
- Author
-
Forslid, Gustav and Wieslander, Håkan
- Abstract
The project consisted of conducting a survey to determine how a good app is structured and which features people want in a training app. The survey was then to inform an app that helps users motivate themselves to exercise. Apps for Apple's phones are programmed in the Objective-C language using the Xcode development platform. The survey results showed that user-friendliness and simple design were the most important aspects of an app's structure. As for interesting training-app features, exercise statistics, example exercises, and workout reminders were popular. The project resulted in an app in which the user can create different workout sessions and view statistics on their training progress.
- Published
- 2014
26. Deep Learning in Image Cytometry: A Review.
- Author
-
Gupta A, Harrison PJ, Wieslander H, Pielawski N, Kartasalo K, Partel G, Solorzano L, Suveer A, Klemm AH, Spjuth O, Sintorn IM, and Wählby C
- Subjects
- Animals, Artificial Intelligence trends, Humans, Image Cytometry instrumentation, Image Cytometry trends, Image Processing, Computer-Assisted methods, Machine Learning, Microscopy instrumentation, Microscopy methods, Neural Networks, Computer, Deep Learning trends, Image Cytometry methods
- Abstract
Artificial intelligence, deep convolutional neural networks, and deep learning are all niche terms that are increasingly appearing in scientific presentations as well as in the general media. In this review, we focus on deep learning and how it is applied to microscopy image data of cells and tissue samples. Starting with an analogy to neuroscience, we aim to give the reader an overview of the key concepts of neural networks, and an understanding of how deep learning differs from more classical approaches for extracting information from image data. We aim to increase the understanding of these methods, while highlighting considerations regarding input data requirements, computational resources, challenges, and limitations. We do not provide a full manual for applying these methods to your own data, but rather review previously published articles on deep learning in image cytometry, and guide the readers toward further reading on specific networks and methods, including new methods not yet applied to cytometry data. © 2018 The Authors. Cytometry Part A published by Wiley Periodicals, Inc. on behalf of International Society for Advancement of Cytometry.
- Published
- 2019
- Full Text
- View/download PDF