
Validation and tuning of in situ transcriptomics image processing workflows with crowdsourced annotations.

Authors:
Vo-Phamhi, Jenny M.
Yamauchi, Kevin A.
Gómez-Sjöberg, Rafael
Source:
PLoS Computational Biology. 8/9/2021, Vol. 17, Issue 8, p1-22. 22p. 1 Color Photograph, 4 Graphs.
Publication Year:
2021

Abstract

Recent advancements in in situ methods, such as multiplexed in situ RNA hybridization and in situ RNA sequencing, have deepened our understanding of the way biological processes are spatially organized in tissues. Automated image processing and spot-calling algorithms for analyzing in situ transcriptomics images have many parameters that must be tuned for optimal detection. Ground truth datasets (images in which the detected spots are known with very high confidence) are essential for evaluating these algorithms and tuning their parameters. We present a first-of-its-kind open-source toolkit and framework for in situ transcriptomics image analysis that incorporates crowdsourced annotations, alongside expert annotations, as a source of ground truth. The kit includes tools for preparing images for crowdsourced annotation so that workers can annotate them reliably, performing quality control (QC) on worker annotations, extracting candidate parameters for spot-calling algorithms from sample images, tuning those parameters, and evaluating spot-calling algorithms and worker performance. These tools are wrapped in a modular pipeline with a flexible structure that lets users take advantage of crowdsourced annotations from any source of their choice. We tested the pipeline using real and synthetic in situ transcriptomics images and annotations from the Amazon Mechanical Turk system obtained via Quanti.us. Using real images from in situ experiments and simulated images produced by one of the tools in the kit, we studied worker sensitivity to spot characteristics and established rules for annotation QC. We explored and demonstrated the use of ground truth generated in this way for validating spot-calling algorithms and tuning their parameters, and confirmed that consensus crowdsourced annotations are a viable substitute for expert-generated ground truth for these purposes.

Author summary:

To understand important biological processes such as development, wound healing, and disease, it is necessary to study where different genes are expressed in a tissue. RNA molecules can be visualized within tissues using in situ transcriptomics tools, which use fluorescent probes that bind to specific RNA target molecules and appear in microscopy images as bright spots. Algorithms can be used to find the locations of these spots, but ground truth datasets (images with spot locations known to high accuracy) are needed to evaluate these algorithms and tune their parameters. While the typical way of generating ground truth is to have an expert annotate the spots by hand, many in situ transcriptomics image datasets are too large for this. However, it is often easy for non-experts to identify the spots with minimal training. In this paper, we present an open-source toolkit and framework that incorporates crowdsourced annotations alongside expert annotations as a source of ground truth for the analysis of in situ transcriptomics images. We explored and demonstrated the use of this framework for validating spot-calling algorithms and tuning their parameters, and confirmed that consensus crowdsourced annotations are a viable substitute for expert-generated ground truth.
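The sketches below illustrate, in ordinary Python, the kinds of steps the abstract describes; none of them is the toolkit's actual code, and all function names and parameter values are hypothetical. First, synthetic images with known spot locations, of the sort produced by the kit's simulator tool, could in principle be generated by placing Gaussian spots on a noisy background:

```python
# A minimal sketch (not the kit's actual simulator) of a synthetic
# in situ image: Gaussian spots of chosen size and intensity on a
# background with shot noise, paired with the true spot coordinates.
import numpy as np

def simulate_spots(shape=(256, 256), n_spots=50, sigma=1.5,
                   amplitude=200.0, background=100.0, seed=None):
    """Return (image, true_coords) with Poisson-noised Gaussian spots."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    coords = rng.uniform(low=[0, 0], high=shape, size=(n_spots, 2))
    image = np.full(shape, background, dtype=float)
    for cy, cx in coords:
        image += amplitude * np.exp(
            -((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2)
        )
    return rng.poisson(image).astype(float), coords  # shot noise
```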
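Consensus crowdsourced annotations, of the sort the paper validates against expert ground truth, could plausibly be formed by density-clustering worker clicks and keeping only clusters supported by several distinct workers. A minimal sketch, assuming per-worker arrays of (row, col) clicks; the eps radius and worker threshold are illustrative choices, not the paper's QC rules:

```python
# A minimal sketch of consensus annotation by density clustering.
# worker_clicks: list of (n_i, 2) arrays of (row, col) clicks, one per worker.
import numpy as np
from sklearn.cluster import DBSCAN

def consensus_spots(worker_clicks, eps=3.0, min_workers=3):
    points = np.vstack(worker_clicks)
    worker_ids = np.concatenate(
        [np.full(len(c), i) for i, c in enumerate(worker_clicks)]
    )
    # Group clicks that lie within `eps` pixels of each other.
    labels = DBSCAN(eps=eps, min_samples=min_workers).fit_predict(points)
    centroids = []
    for label in set(labels) - {-1}:  # -1 marks unclustered noise
        mask = labels == label
        # Require support from distinct workers, not repeat clicks by one worker.
        if len(np.unique(worker_ids[mask])) >= min_workers:
            centroids.append(points[mask].mean(axis=0))
    return np.array(centroids)
```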
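Finally, evaluating a spot-calling algorithm against ground truth, and tuning one of its parameters, reduces to matching detected spots to ground-truth spots within a pixel tolerance and sweeping the parameter for the best score. A minimal sketch, with a hypothetical detector callable and an illustrative tolerance rather than the paper's metrics:

```python
# A minimal sketch of scoring detections against ground truth and
# tuning a detector threshold by maximizing F1 over a parameter sweep.
import numpy as np
from scipy.spatial import cKDTree

def match_score(detected, truth, tol=2.0):
    """Precision/recall/F1 from greedy one-to-one nearest-neighbor matching."""
    if len(detected) == 0 or len(truth) == 0:
        return 0.0, 0.0, 0.0
    dists, idx = cKDTree(truth).query(detected, k=1)
    matched, tp = set(), 0
    for d, j in sorted(zip(dists, idx)):  # closest pairs claim matches first
        if d <= tol and j not in matched:
            matched.add(j)
            tp += 1
    precision, recall = tp / len(detected), tp / len(truth)
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

def tune_threshold(detector, image, truth, thresholds):
    """detector(image, t) -> (m, 2) coords; return the F1-maximizing threshold."""
    scores = [match_score(detector(image, t), truth)[2] for t in thresholds]
    return thresholds[int(np.argmax(scores))]
```

Greedy one-to-one matching prevents a single detection from claiming several ground-truth spots, which would otherwise inflate recall.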

Details

Language:
English
ISSN:
1553-734X
Volume:
17
Issue:
8
Database:
Academic Search Index
Journal:
PLoS Computational Biology
Publication Type:
Academic Journal
Accession Number:
151817714
Full Text:
https://doi.org/10.1371/journal.pcbi.1009274