Search Results
8 results for "Tie, Cheng-Wei"
2. Narrow band imaging-based radiogenomics for predicting radiosensitivity in nasopharyngeal carcinoma
- Author
- Tie, Cheng-Wei, Dong, Xin, Zhu, Ji-Qing, Wang, Kai, Liu, Xu-Dong, Liu, Yu-Meng, Wang, Gui-Qi, Zhang, Ye, and Ni, Xiao-Guang
- Published
- 2024
- Full Text
- View/download PDF
3. Laryngoscopy-based scoring system for the diagnosis of vocal fold leukoplakia.
- Author
- Ni, Xiao-Guang, Zhu, Ji-Qing, Tie, Cheng-Wei, Wang, Mei-Ling, Zhang, Wei, and Wang, Gui-Qi
- Subjects
- TUMOR diagnosis, STATISTICS, LEUKOPLAKIA, MULTIVARIATE analysis, VOCAL cords, DIFFERENTIAL diagnosis, LARYNGEAL tumors, RETROSPECTIVE studies, DESCRIPTIVE statistics, RESEARCH funding, LARYNGOSCOPY, DATA analysis software, LOGISTIC regression analysis, RECEIVER operating characteristic curves, ODDS ratio
- Abstract
Objective: To propose a scoring system based on laryngoscopic characteristics for the differential diagnosis of benign and malignant vocal fold leukoplakia. Methods: Laryngoscopic images from 200 vocal fold leukoplakia cases were retrospectively analysed. The laryngoscopic signs of benign and malignant vocal fold leukoplakia were compared, and statistically significant features were assigned point values and summed to establish the leukoplakia finding score. Results: Five indicators associated with malignant vocal fold leukoplakia were included in the leukoplakia finding score, which has a possible range of 0–10 points. A score of 6 points or more was indicative of malignant vocal fold leukoplakia. The sensitivity, specificity and accuracy of the leukoplakia finding score were 93.8 per cent, 83.6 per cent and 86.0 per cent, respectively. Agreement between the leukoplakia finding scores assigned by different laryngologists was strong (kappa = 0.809). Conclusion: This scoring system based on laryngoscopic characteristics has high diagnostic value for distinguishing benign from malignant vocal fold leukoplakia. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
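The score-threshold logic described in the abstract above can be sketched as follows. Note that the five indicators and their individual point values are not given in this record, so the example inputs are placeholders; only the 0–10 range and the ≥6 malignancy cut-off come from the abstract.

```python
def leukoplakia_finding_score(indicator_points):
    """Sum the per-indicator points; the abstract implies the five
    indicators together span a 0-10 point range."""
    return sum(indicator_points)

def classify(score, cutoff=6):
    """A score of 6 points or more is read as malignant, per the abstract."""
    return "malignant" if score >= cutoff else "benign"

# Hypothetical case: five indicators contributing 2, 2, 1, 0 and 1 points.
example = leukoplakia_finding_score([2, 2, 1, 0, 1])  # 6 -> malignant
```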
4. The Detection of Nasopharyngeal Carcinomas Using a Neural Network Based on Nasopharyngoscopic Images.
- Author
- Wang, Shi-Xu, Li, Ying, Zhu, Ji-Qing, Wang, Mei-Ling, Zhang, Wei, Tie, Cheng-Wei, Wang, Gui-Qi, and Ni, Xiao-Guang
- Abstract
Objective: To construct and validate a deep convolutional neural network (DCNN)-based artificial intelligence (AI) system for the detection of nasopharyngeal carcinoma (NPC) using archived nasopharyngoscopic images. Methods: We retrospectively collected 14,107 nasopharyngoscopic images (7,108 NPCs and 6,999 noncancers) to construct a DCNN model, and prepared a validation dataset containing 3,501 images (1,744 NPCs and 1,757 noncancers) from a single center between January 2009 and December 2020. The DCNN model was built on the You Only Look Once (YOLOv5) architecture. Four otolaryngologists were asked to review the images in the validation set to benchmark the DCNN model's performance. Results: The DCNN model analyzed the 3,501 images in 69.35 s. On the validation dataset, the precision, recall, accuracy, and F1 score of the DCNN model in the detection of NPCs were 0.845 ± 0.038, 0.942 ± 0.021, 0.920 ± 0.024, and 0.890 ± 0.045 on white light imaging (WLI), and 0.895 ± 0.045, 0.941 ± 0.018, 0.975 ± 0.013, and 0.918 ± 0.036 on narrow band imaging (NBI), respectively. The diagnostic performance of the DCNN model on WLI and NBI images was significantly higher than that of two junior otolaryngologists (p < 0.05). Conclusion: The DCNN model showed better diagnostic outcomes for NPCs than junior otolaryngologists and could therefore assist them in improving their diagnostic level and reducing missed diagnoses. Level of Evidence: 3 Laryngoscope, 134:127–135, 2024 [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
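The precision, recall, accuracy, and F1 values reported in this abstract are standard detection metrics computed from true/false positive and negative counts. A minimal sketch (the counts below are invented for illustration, not the study's data):

```python
def detection_metrics(tp, fp, fn, tn):
    """Compute precision, recall (sensitivity), accuracy, and F1
    from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1

# Toy example: 90 true positives, 10 false positives,
# 10 false negatives, 90 true negatives.
p, r, a, f = detection_metrics(90, 10, 10, 90)
```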
5. Deep Learning for nasopharyngeal Carcinoma Identification Using Both White Light and Narrow-Band Imaging Endoscopy.
- Author
- Xu, Jianwei, Wang, Jun, Bian, Xianzhang, Zhu, Ji-Qing, Tie, Cheng-Wei, Liu, Xiaoqing, Zhou, Zhiyong, Ni, Xiao-Guang, and Qian, Dahong
- Abstract
Objectives/hypothesis: To develop a deep-learning-based automatic diagnosis system for distinguishing nasopharyngeal carcinoma (NPC) from noncancer (inflammation and hyperplasia), using both white light imaging (WLI) and narrow-band imaging (NBI) nasopharyngoscopy images. Study Design: Retrospective study. Methods: A total of 4,783 nasopharyngoscopy images (2,898 WLI and 1,885 NBI) of 671 patients were collected, and a novel deep convolutional neural network (DCNN) framework, named the Siamese deep convolutional neural network (S-DCNN), was developed; it can utilize WLI and NBI images simultaneously to improve classification performance. To verify the effectiveness of combining these two image modalities for prediction, the proposed S-DCNN was compared with two baseline models, DCNN-1 (WLI images only) and DCNN-2 (NBI images only). Results: In threefold cross-validation, the overall accuracy and area under the curve of the three DCNNs were 94.9% (95% confidence interval [CI] 93.3%-96.5%) and 0.986 (95% CI 0.982-0.992), 87.0% (95% CI 84.2%-89.7%) and 0.930 (95% CI 0.906-0.961), and 92.8% (95% CI 90.4%-95.3%) and 0.971 (95% CI 0.953-0.992), respectively. The accuracy of the S-DCNN was significantly higher than that of DCNN-1 (P-value <.001) and DCNN-2 (P-value = .008). Conclusion: Using deep-learning technology to automatically diagnose NPC under nasopharyngoscopy can provide a valuable reference for NPC screening. Superior performance can be obtained by simultaneously utilizing the multimodal features of the NBI and WLI images of the same patient. Level Of Evidence: 3 Laryngoscope, 132:999-1007, 2022. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
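The two-branch fusion idea behind the S-DCNN can be illustrated schematically: features are extracted from the WLI and NBI images of the same patient by a shared (Siamese) branch, concatenated, and passed to a classifier head. The toy linear "branch" and weights below are stand-ins, not the published network architecture.

```python
import numpy as np

def extract_features(image, weights):
    """Toy branch: a single linear projection plus tanh, standing in
    for the convolutional feature extractor."""
    return np.tanh(image @ weights)

def fused_logit(wli_img, nbi_img, w_branch, w_head):
    """Run both modalities through the SAME branch weights (Siamese
    sharing), concatenate the features, and score with a linear head."""
    fused = np.concatenate([extract_features(wli_img, w_branch),
                            extract_features(nbi_img, w_branch)])
    return float(fused @ w_head)
```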
6. Multi-Instance Learning for Vocal Fold Leukoplakia Diagnosis Using White Light and Narrow-Band Imaging: A Multicenter Study.
- Author
- Tie CW, Li DY, Zhu JQ, Wang ML, Wang JH, Chen BH, Li Y, Zhang S, Liu L, Guo L, Yang L, Yang LQ, Wei J, Jiang F, Zhao ZQ, Wang GQ, Zhang W, Zhang QM, and Ni XG
- Abstract
Objectives: Vocal fold leukoplakia (VFL) is a precancerous lesion of laryngeal cancer, and its endoscopic diagnosis poses challenges. We aimed to develop an artificial intelligence (AI) model using white light imaging (WLI) and narrow-band imaging (NBI) to distinguish benign from malignant VFL. Methods: A total of 7057 images from 426 patients were used for model development and internal validation. Additionally, 1617 images from two other hospitals were used for external validation. Models based on the WLI and NBI modalities were built using deep learning combined with a multi-instance learning (MIL) approach. Furthermore, 50 prospectively collected videos were used to evaluate real-time model performance. A human-machine comparison involving 100 patients and 12 laryngologists assessed the real-world effectiveness of the model. Results: The model achieved the highest area under the receiver operating characteristic curve (AUC) values of 0.868 and 0.884 in the internal and external validation sets, respectively. The AUC in the video validation set was 0.825 (95% CI: 0.704-0.946). In the human-machine comparison, AI assistance significantly improved the AUC and accuracy for all laryngologists (p < 0.05), and improved their diagnostic ability and consistency. Conclusions: Our multicenter study developed an effective AI model using MIL and the fusion of WLI and NBI images for VFL diagnosis, particularly aiding junior laryngologists. However, further optimization and validation are necessary to fully assess its potential impact in clinical settings. Level of Evidence: 3 Laryngoscope, 2024. (© 2024 The American Laryngological, Rhinological and Otological Society, Inc.)
- Published
- 2024
- Full Text
- View/download PDF
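In the multi-instance learning (MIL) setting described above, each patient contributes a "bag" of endoscopic images, and a patient-level score is pooled from per-image scores. Max pooling is one common MIL aggregator; this record does not specify the paper's exact aggregation rule, so the sketch below is illustrative only.

```python
def patient_score(image_probs, pooling="max"):
    """Pool per-image malignancy probabilities into one bag-level score."""
    if pooling == "max":
        return max(image_probs)
    return sum(image_probs) / len(image_probs)  # mean pooling alternative

def diagnose(image_probs, threshold=0.5):
    """Bag-level decision: malignant if the pooled score clears a
    (hypothetical) 0.5 threshold."""
    return "malignant" if patient_score(image_probs) >= threshold else "benign"
```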
7. Revealing molecular and cellular heterogeneity in hypopharyngeal carcinogenesis through single-cell RNA and TCR/BCR sequencing.
- Author
- Tie CW, Zhu JQ, Yu Z, Dou LZ, Wang ML, Wang GQ, and Ni XG
- Subjects
- Humans, Male, Biomarkers, Tumor genetics, Gene Expression Regulation, Neoplastic, Receptors, Antigen, B-Cell genetics, Receptors, Antigen, B-Cell metabolism, Receptors, Antigen, T-Cell genetics, Receptors, Antigen, T-Cell metabolism, Sequence Analysis, RNA, Transcriptome, Tumor Microenvironment immunology, Tumor Microenvironment genetics, Carcinogenesis genetics, Hypopharyngeal Neoplasms genetics, Hypopharyngeal Neoplasms pathology, Hypopharyngeal Neoplasms immunology, Single-Cell Analysis, Squamous Cell Carcinoma of Head and Neck genetics, Squamous Cell Carcinoma of Head and Neck immunology, Squamous Cell Carcinoma of Head and Neck pathology
- Abstract
Introduction: Hypopharyngeal squamous cell carcinoma (HSCC) is among the head and neck cancers with the worst prognosis. In HSCC, the transformation from normal tissue through low-grade and high-grade intraepithelial neoplasia to cancerous tissue is viewed as a progressive pathological sequence typical of tumorigenesis. Nonetheless, the alterations in diverse cell clusters within the tissue microenvironment (TME) throughout tumorigenesis, and their impact on the development of HSCC, are yet to be fully understood. Methods: We employed single-cell RNA sequencing and TCR/BCR sequencing to sequence 60,854 cells from nine tissue samples representing different stages in the progression of HSCC. This allowed us to construct dynamic transcriptomic maps of cells in diverse TMEs across disease stages and to experimentally validate the key molecules within them. Results: We delineated the heterogeneity among tumor cells, immune cells (including T cells, B cells, and myeloid cells), and stromal cells (such as fibroblasts and endothelial cells) during the tumorigenesis of HSCC. We uncovered alterations in the function and state of distinct cell clusters at different stages of tumor development and identified specific clusters closely associated with HSCC tumorigenesis. Consequently, we discovered molecules such as MAGEA3 and MMP3 that are pivotal for the diagnosis and treatment of HSCC. Discussion: Our research sheds light on the dynamic alterations within the TME during HSCC tumorigenesis, which will help to elucidate the mechanism of malignant transformation, identify early diagnostic markers, and discover new therapeutic targets. Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. (Copyright © 2024 Tie, Zhu, Yu, Dou, Wang, Wang and Ni.)
- Published
- 2024
- Full Text
- View/download PDF
8. Convolutional neural network based anatomical site identification for laryngoscopy quality control: A multicenter study.
- Author
- Zhu JQ, Wang ML, Li Y, Zhang W, Li LJ, Liu L, Zhang Y, Han CJ, Tie CW, Wang SX, Wang GQ, and Ni XG
- Subjects
- Humans, Laryngoscopy methods, Retrospective Studies, Neural Networks, Computer, Artificial Intelligence, Head and Neck Neoplasms
- Abstract
Objectives: Video laryngoscopy is an important diagnostic tool for head and neck cancers. Artificial intelligence (AI) systems have been shown to monitor blind spots during esophagogastroduodenoscopy. This study aimed to test the performance of an AI-driven intelligent laryngoscopy monitoring assistant (ILMA), based on a convolutional neural network (CNN), in identifying landmark anatomical sites on laryngoscopic images and videos. Materials and Methods: Laryngoscopic images taken from January to December 2018 were retrospectively collected, and ILMA was developed using the Inception-ResNet-v2 + Squeeze-and-Excitation Networks (SENet) CNN model. A total of 16,000 laryngoscopic images were used for training. These were assigned to 20 landmark anatomical sites covering six major head and neck regions. In addition, the performance of ILMA in identifying anatomical sites was validated using 4,000 laryngoscopic images and 25 videos provided by five other tertiary hospitals. Results: ILMA identified the 20 anatomical sites on laryngoscopic images with a total accuracy of 97.60%, and the average sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 100%, 99.87%, 97.65%, and 99.87%, respectively. In addition, multicenter clinical verification showed that the accuracy of ILMA in identifying the 20 targeted anatomical sites in 25 laryngoscopic videos from five hospitals was ≥95%. Conclusion: The proposed CNN-based ILMA model can rapidly and accurately identify the anatomical sites on laryngoscopic images. The model can reflect the coverage of head and neck anatomical regions by laryngoscopy, showing potential for improving the quality of laryngoscopy. Competing Interests: Declaration of competing interest: None. (Copyright © 2022 The Authors. Published by Elsevier Inc. All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF
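The per-site sensitivity, specificity, PPV, and NPV reported in this abstract are typically derived one-vs-rest from a multi-class confusion matrix over the anatomical-site labels. A hedged sketch of that computation (the 3×3 matrix in the test is invented for illustration, far smaller than the 20-class problem in the study):

```python
import numpy as np

def per_class_metrics(cm, k):
    """One-vs-rest metrics for class k of a multi-class confusion matrix,
    where cm[i][j] counts items of true class i predicted as class j."""
    cm = np.asarray(cm)
    tp = cm[k, k]
    fn = cm[k, :].sum() - tp   # class-k items predicted as something else
    fp = cm[:, k].sum() - tp   # other items predicted as class k
    tn = cm.sum() - tp - fn - fp
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn)}
```

Averaging these per-class dictionaries over all classes gives the macro-averaged figures quoted in the abstract.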