18 results on '"Liu, Yuyuan"'
Search Results
2. BRAIxDet: Learning to Detect Malignant Breast Lesion with Incomplete Annotations
- Author
-
Chen, Yuanhong, Liu, Yuyuan, Wang, Chong, Elliott, Michael, Kwok, Chun Fung, Pena-Solorzano, Carlos, Tian, Yu, Liu, Fengbei, Frazer, Helen, McCarthy, Davis J., and Carneiro, Gustavo
- Subjects
FOS: Computer and information sciences; Computer Science - Machine Learning; Artificial Intelligence (cs.AI); Computer Science - Artificial Intelligence; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition; Machine Learning (cs.LG)
- Abstract
Methods to detect malignant lesions from screening mammograms are usually trained with fully annotated datasets, where images are labelled with the localisation and classification of cancerous lesions. However, real-world screening mammogram datasets commonly have a subset that is fully annotated and another subset that is weakly annotated with just the global classification (i.e., without lesion localisation). Given the large size of such datasets, researchers usually face a dilemma with the weakly annotated subset: to not use it or to fully annotate it. The first option will reduce detection accuracy because it does not use the whole dataset, and the second option is too expensive given that the annotation needs to be done by expert radiologists. In this paper, we propose a middle-ground solution for the dilemma, which is to formulate the training as a weakly- and semi-supervised learning problem that we refer to as malignant breast lesion detection with incomplete annotations. To address this problem, our new method comprises two stages, namely: 1) pre-training a multi-view mammogram classifier with weak supervision from the whole dataset, and 2) extending the trained classifier to become a multi-view detector that is trained with semi-supervised student-teacher learning, where the training set contains fully and weakly-annotated mammograms. We provide extensive detection results on two real-world screening mammogram datasets containing incomplete annotations, and show that our proposed approach achieves state-of-the-art results in the detection of malignant breast lesions with incomplete annotations., Under Review
- Published
- 2023
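The two-stage recipe above ends with semi-supervised student-teacher learning. A minimal sketch of that idea, with an EMA-updated teacher that pseudo-labels the weakly annotated images for the student, is shown below; the function names, momentum, and confidence threshold are illustrative assumptions, not the paper's implementation:

```python
def ema_update(teacher, student, momentum=0.99):
    """One EMA step: the teacher's weights drift slowly toward the student's."""
    return [momentum * t + (1 - momentum) * s for t, s in zip(teacher, student)]

def keep_pseudo_label(teacher_confidence, threshold=0.9):
    """A teacher detection becomes a pseudo box label only when confident."""
    return teacher_confidence >= threshold

# Toy 2-parameter "models": the teacher moves a fraction toward the student.
teacher = [0.0, 1.0]
student = [1.0, 0.0]
teacher = ema_update(teacher, student, momentum=0.9)
assert keep_pseudo_label(0.95) and not keep_pseudo_label(0.5)
```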
3. Learning Support and Trivial Prototypes for Interpretable Image Classification
- Author
-
Wang, Chong, Liu, Yuyuan, Chen, Yuanhong, Liu, Fengbei, Tian, Yu, McCarthy, Davis J., Frazer, Helen, and Carneiro, Gustavo
- Subjects
FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition
- Abstract
Prototypical part network (ProtoPNet) methods have been designed to achieve interpretable classification by associating predictions with a set of training prototypes, which we refer to as trivial prototypes because they are trained to lie far from the classification boundary in the feature space. Note that it is possible to make an analogy between ProtoPNet and support vector machine (SVM) given that the classification from both methods relies on computing similarity with a set of training points (i.e., trivial prototypes in ProtoPNet, and support vectors in SVM). However, while trivial prototypes are located far from the classification boundary, support vectors are located close to this boundary, and we argue that this discrepancy with the well-established SVM theory can result in ProtoPNet models with inferior classification accuracy. In this paper, we aim to improve the classification of ProtoPNet with a new method to learn support prototypes that lie near the classification boundary in the feature space, as suggested by the SVM theory. In addition, we target the improvement of classification results with a new model, named ST-ProtoPNet, which exploits our support prototypes and the trivial prototypes to provide more effective classification. Experimental results on CUB-200-2011, Stanford Cars, and Stanford Dogs datasets demonstrate that ST-ProtoPNet achieves state-of-the-art classification accuracy and interpretability results. We also show that the proposed support prototypes tend to be better localised in the object of interest rather than in the background region.
- Published
- 2023
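The SVM analogy above rests on classifying by similarity to stored training points. A toy sketch of ProtoPNet-style scoring, where each class's logit is its best similarity to any of that class's prototypes; the helper names and the 2-D feature space are hypothetical:

```python
def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def prototype_logits(feature, prototypes_per_class):
    """Class score = highest similarity to any of that class's prototypes,
    whether those prototypes are trivial (far from the boundary) or support
    (near the boundary)."""
    return [max(cosine(feature, p) for p in protos)
            for protos in prototypes_per_class]

# Two classes, one prototype each, in a hypothetical 2-D feature space.
protos = [[[1.0, 0.0]], [[0.0, 1.0]]]
feat = [0.9, 0.1]
logits = prototype_logits(feat, protos)
assert logits[0] > logits[1]  # the feature sits closer to class 0's prototype
```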
4. A Closer Look at Audio-Visual Semantic Segmentation
- Author
-
Chen, Yuanhong, Liu, Yuyuan, Wang, Hu, Liu, Fengbei, Wang, Chong, and Carneiro, Gustavo
- Subjects
FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition; Computer Science - Multimedia; Multimedia (cs.MM)
- Abstract
Audio-visual segmentation (AVS) is a complex task that involves accurately segmenting the corresponding sounding object based on audio-visual queries. Successful audio-visual learning requires two essential components: 1) an unbiased dataset with high-quality pixel-level multi-class labels, and 2) a model capable of effectively linking audio information with its corresponding visual object. However, these two requirements are only partially addressed by current methods, with training sets containing biased audio-visual data, and models that generalise poorly beyond this biased training set. In this work, we propose a new strategy to build cost-effective and relatively unbiased audio-visual semantic segmentation benchmarks. Our strategy, called Visual Post-production (VPO), explores the observation that it is not necessary to have explicit audio-visual pairs extracted from single video sources to build such benchmarks. We also refine the previously proposed AVSBench to transform it into the audio-visual semantic segmentation benchmark AVSBench-Single+. Furthermore, this paper introduces a new pixel-wise audio-visual contrastive learning method to enable a better generalisation of the model beyond the training set. We verify the validity of the VPO strategy by showing that state-of-the-art (SOTA) models trained with datasets built by matching audio and visual data from different sources or with datasets containing audio and visual data from the same video source produce almost the same accuracy. Then, using the proposed VPO benchmarks and AVSBench-Single+, we show that our method produces more accurate audio-visual semantic segmentation than SOTA models. Code and dataset will be available.
- Published
- 2023
5. Characterizing water seepage damage in the chest-abdomen area of the Leshan Giant Buddha
- Author
-
Liu Yuyuan, Sun Bo, Zhang Peng, and Shen Xiwang
- Subjects
Chest abdomen; Water seepage; Gautama Buddha; Geotechnical engineering; Geology
- Published
- 2021
6. Residual Pattern Learning for Pixel-wise Out-of-Distribution Detection in Semantic Segmentation
- Author
-
Liu, Yuyuan, Ding, Choubo, Tian, Yu, Pang, Guansong, Belagiannis, Vasileios, Reid, Ian, and Carneiro, Gustavo
- Subjects
FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition
- Abstract
Semantic segmentation models classify pixels into a set of known ("in-distribution") visual classes. When deployed in an open world, the reliability of these models depends on their ability not only to classify in-distribution pixels but also to detect out-of-distribution (OoD) pixels. Historically, the poor OoD detection performance of these models has motivated the design of methods based on model re-training using synthetic training images that include OoD visual objects. Although successful, these re-trained methods have two issues: 1) their in-distribution segmentation accuracy may drop during re-training, and 2) their OoD detection accuracy does not generalise well to new contexts (e.g., country surroundings) outside the training set (e.g., city surroundings). In this paper, we mitigate these issues with: (i) a new residual pattern learning (RPL) module that assists the segmentation model to detect OoD pixels without affecting the inlier segmentation performance; and (ii) a novel context-robust contrastive learning (CoroCL) that enforces RPL to robustly detect OoD pixels among various contexts. Our approach improves the previous state-of-the-art by around 10% FPR and 7% AuPRC on the Fishyscapes, Segment-Me-If-You-Can, and RoadAnomaly datasets. Our code is available at: https://github.com/yyliu01/RPL., 16 pages, 11 figures; preprint version
- Published
- 2022
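For context, a widely used pixel-wise OoD score in this literature is the free energy, the negative log-sum-exp of the class logits; this is an illustrative baseline for scoring pixels, not necessarily the exact score RPL optimises:

```python
import math

def free_energy(logits):
    """Per-pixel OoD score: -logsumexp over the class logits.
    A confidently classified in-distribution pixel (one large logit)
    gets a very low score; a pixel no class fits gets a higher one."""
    m = max(logits)
    return -(m + math.log(sum(math.exp(l - m) for l in logits)))

inlier = [9.0, 0.0, 0.0]   # confidently 'road'
outlier = [0.3, 0.2, 0.1]  # no known class fits well
assert free_energy(outlier) > free_energy(inlier)
```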
7. Knowledge Distillation to Ensemble Global and Interpretable Prototype-Based Mammogram Classification Models
- Author
-
Wang, Chong, Chen, Yuanhong, Liu, Yuyuan, Tian, Yu, Liu, Fengbei, McCarthy, Davis J., Elliott, Michael, Frazer, Helen, and Carneiro, Gustavo
- Subjects
FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition
- Abstract
State-of-the-art (SOTA) deep learning mammogram classifiers, trained with weakly-labelled images, often rely on global models that produce predictions with limited interpretability, which is a key barrier to their successful translation into clinical practice. On the other hand, prototype-based models improve interpretability by associating predictions with training image prototypes, but they are less accurate than global models and their prototypes tend to have poor diversity. We address these two issues with the proposal of BRAIxProtoPNet++, which adds interpretability to a global model by ensembling it with a prototype-based model. BRAIxProtoPNet++ distills the knowledge of the global model when training the prototype-based model with the goal of increasing the classification accuracy of the ensemble. Moreover, we propose an approach to increase prototype diversity by guaranteeing that all prototypes are associated with different training images. Experiments on weakly-labelled private and public datasets show that BRAIxProtoPNet++ has higher classification accuracy than SOTA global and prototype-based models. Using lesion localisation to assess model interpretability, we show BRAIxProtoPNet++ is more effective than other prototype-based models and post-hoc explanation of global models. Finally, we show that the diversity of the prototypes learned by BRAIxProtoPNet++ is superior to SOTA prototype-based approaches., MICCAI 2022
- Published
- 2022
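The distillation step described above can be illustrated with a KL-divergence term that pushes the prototype-based student toward the global teacher's prediction; the helper names and toy probabilities are assumptions, not the paper's loss:

```python
import math

def kd_term(teacher_prob, student_prob, eps=1e-12):
    """One term of the KL distillation loss between two output distributions."""
    return teacher_prob * math.log((teacher_prob + eps) / (student_prob + eps))

def kl_divergence(teacher, student):
    """KL(teacher || student): small when the student mimics the teacher."""
    return sum(kd_term(t, s) for t, s in zip(teacher, student))

aligned = kl_divergence([0.9, 0.1], [0.85, 0.15])
misaligned = kl_divergence([0.9, 0.1], [0.3, 0.7])
assert aligned < misaligned  # distillation rewards agreement with the teacher
```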
8. Unsupervised Anomaly Detection in Medical Images with a Memory-augmented Multi-level Cross-attentional Masked Autoencoder
- Author
-
Tian, Yu, Pang, Guansong, Liu, Yuyuan, Wang, Chong, Chen, Yuanhong, Liu, Fengbei, Singh, Rajvinder, Verjans, Johan W, and Carneiro, Gustavo
- Subjects
FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); FOS: Electrical engineering, electronic engineering, information engineering; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; Computer Science - Computer Vision and Pattern Recognition; Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Unsupervised anomaly detection (UAD) aims to find anomalous images by optimising a detector using a training set that contains only normal images. UAD approaches can be based on reconstruction methods, self-supervised approaches, and Imagenet pre-trained models. Reconstruction methods, which detect anomalies from image reconstruction errors, are advantageous because they do not rely on the design of problem-specific pretext tasks needed by self-supervised approaches, and on the unreliable translation of models pre-trained from non-medical datasets. However, reconstruction methods may fail because they can have low reconstruction errors even for anomalous images. In this paper, we introduce a new reconstruction-based UAD approach that addresses this low-reconstruction error issue for anomalous images. Our UAD approach, the memory-augmented multi-level cross-attentional masked autoencoder (MemMC-MAE), is a transformer-based approach, consisting of a novel memory-augmented self-attention operator for the encoder and a new multi-level cross-attention operator for the decoder. MemMC-MAE masks large parts of the input image during its reconstruction, reducing the risk that it will produce low reconstruction errors because anomalies are likely to be masked and cannot be reconstructed. However, when the anomaly is not masked, then the normal patterns stored in the encoder's memory combined with the decoder's multi-level cross-attention will constrain the accurate reconstruction of the anomaly. We show that our method achieves SOTA anomaly detection and localisation on colonoscopy and Covid-19 Chest X-ray datasets., Technical report, 11 pages, 3 figures
- Published
- 2022
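The core masking idea above is simple to sketch: hide most patches so an anomaly is likely masked and cannot be trivially reconstructed. The 75% ratio and helper below are illustrative assumptions, not MemMC-MAE's actual configuration:

```python
import random

def mask_patches(patches, ratio=0.75, seed=0):
    """Hide a large fraction of patches; the autoencoder only sees the rest
    and must reconstruct the hidden ones from normal patterns."""
    rng = random.Random(seed)
    n_hidden = int(len(patches) * ratio)
    hidden = set(rng.sample(range(len(patches)), n_hidden))
    visible = [p for i, p in enumerate(patches) if i not in hidden]
    return visible, hidden

patches = list(range(16))            # a toy 4x4 grid of image patches
visible, hidden = mask_patches(patches)
assert len(hidden) == 12 and len(visible) == 4
```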
9. BoMD: Bag of Multi-label Descriptors for Noisy Chest X-ray Classification
- Author
-
Chen, Yuanhong, Liu, Fengbei, Wang, Hu, Wang, Chong, Tian, Yu, Liu, Yuyuan, and Carneiro, Gustavo
- Subjects
FOS: Computer and information sciences; Computer Science - Machine Learning; Artificial Intelligence (cs.AI); ComputingMethodologies_PATTERNRECOGNITION; Computer Science - Artificial Intelligence; Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); FOS: Electrical engineering, electronic engineering, information engineering; Computer Science - Computer Vision and Pattern Recognition; Electrical Engineering and Systems Science - Image and Video Processing; Machine Learning (cs.LG)
- Abstract
Deep learning methods have shown outstanding classification accuracy in medical imaging problems, which is largely attributed to the availability of large-scale datasets manually annotated with clean labels. However, given the high cost of such manual annotation, new medical imaging classification problems may need to rely on machine-generated noisy labels extracted from radiology reports. Indeed, many Chest X-ray (CXR) classifiers have already been modelled from datasets with noisy labels, but their training procedure is in general not robust to noisy-label samples, leading to sub-optimal models. Furthermore, CXR datasets are mostly multi-label, so current noisy-label learning methods designed for multi-class problems cannot be easily adapted. In this paper, we propose a new method designed for noisy multi-label CXR learning, which detects and smoothly re-labels samples from the dataset; the re-labelled dataset is then used to train common multi-label classifiers. The proposed method optimises a bag of multi-label descriptors (BoMD) to promote their similarity with the semantic descriptors produced by BERT models from the multi-label image annotation. Our experiments on diverse noisy multi-label training sets and clean testing sets show that our model has state-of-the-art accuracy and robustness in many CXR multi-label classification benchmarks.
- Published
- 2022
10. Translation Consistent Semi-supervised Segmentation for 3D Medical Images
- Author
-
Liu, Yuyuan, Tian, Yu, Wang, Chong, Chen, Yuanhong, Liu, Fengbei, Belagiannis, Vasileios, and Carneiro, Gustavo
- Subjects
FOS: Computer and information sciences; Artificial Intelligence (cs.AI); Computer Science - Artificial Intelligence; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition
- Abstract
3D medical image segmentation methods have been successful, but their dependence on large amounts of voxel-level annotated data is a disadvantage that needs to be addressed given the high cost to obtain such annotation. Semi-supervised learning (SSL) solves this issue by training models with a large unlabelled and a small labelled dataset. The most successful SSL approaches are based on consistency learning that minimises the distance between model responses obtained from perturbed views of the unlabelled data. These perturbations usually keep the spatial input context between views fairly consistent, which may cause the model to learn segmentation patterns from the spatial input contexts instead of the segmented objects. In this paper, we introduce the Translation Consistent Co-training (TraCoCo), which is a consistency learning SSL method that perturbs the input data views by varying their spatial input context, allowing the model to learn segmentation patterns from visual objects. Furthermore, we propose the replacement of the commonly used mean squared error (MSE) semi-supervised loss by a new Cross-model confident Binary Cross entropy (CBC) loss, which improves training convergence and keeps the robustness to co-training pseudo-labelling mistakes. We also extend CutMix augmentation to 3D SSL to further improve generalisation. Our TraCoCo shows state-of-the-art results for the Left Atrium (LA) and Brain Tumor Segmentation (BraTS19) datasets with different backbones. Our code is available at https://github.com/yyliu01/TraCoCo.
- Published
- 2022
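The translation-consistency idea above can be sketched in 1-D: take two crop windows shifted relative to each other, then penalise disagreement on the overlapping region. The MSE shown here is the baseline loss that the paper replaces with its CBC loss; all names and sizes are illustrative:

```python
def shifted_crops(volume_len, crop_len, shift):
    """Two 1-D crop windows whose spatial context differs but which share
    an overlapping region where predictions must agree."""
    a = (0, crop_len)
    b = (shift, shift + crop_len)
    overlap = (max(a[0], b[0]), min(a[1], b[1]))
    return a, b, overlap

def consistency_loss(pred_a, pred_b):
    """Mean squared disagreement on the overlap (the standard MSE baseline
    that TraCoCo replaces with a confidence-gated cross-entropy)."""
    return sum((x - y) ** 2 for x, y in zip(pred_a, pred_b)) / len(pred_a)

a, b, ov = shifted_crops(volume_len=64, crop_len=32, shift=8)
assert ov == (8, 32)  # voxels 8..31 are seen by both crops
```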
11. Contrastive Transformer-based Multiple Instance Learning for Weakly Supervised Polyp Frame Detection
- Author
-
Tian, Yu, Pang, Guansong, Liu, Fengbei, Liu, Yuyuan, Wang, Chong, Chen, Yuanhong, Verjans, Johan W, and Carneiro, Gustavo
- Subjects
FOS: Computer and information sciences; surgical procedures, operative; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition; otorhinolaryngologic diseases; digestive system diseases
- Abstract
Current polyp detection methods from colonoscopy videos use exclusively normal (i.e., healthy) training images, which i) ignore the importance of temporal information in consecutive video frames, and ii) lack knowledge about the polyps. Consequently, they often have high detection errors, especially on challenging polyp cases (e.g., small, flat, or partially visible polyps). In this work, we formulate polyp detection as a weakly-supervised anomaly detection task that uses video-level labelled training data to detect frame-level polyps. In particular, we propose a novel convolutional transformer-based multiple instance learning method designed to identify abnormal frames (i.e., frames with polyps) from anomalous videos (i.e., videos containing at least one frame with polyp). In our method, local and global temporal dependencies are seamlessly captured while we simultaneously optimise video and snippet-level anomaly scores. A contrastive snippet mining method is also proposed to enable an effective modelling of the challenging polyp cases. The resulting method achieves a detection accuracy that is substantially better than current state-of-the-art approaches on a new large-scale colonoscopy video dataset introduced in this work., Comment: MICCAI 2022 Early Accept
- Published
- 2022
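As background for the multiple instance learning formulation above, a standard MIL aggregation scores a video by the mean of its top-k snippet scores, so a few polyp frames dominate the bag even though only the video-level label is known; this is a generic sketch, not the paper's exact objective:

```python
def video_score(snippet_scores, k=2):
    """Weakly supervised MIL aggregation: the video-level anomaly score is
    the mean of its k highest snippet-level scores."""
    top = sorted(snippet_scores, reverse=True)[:k]
    return sum(top) / len(top)

polyp_video = [0.1, 0.9, 0.8, 0.1]    # two snippets contain a polyp
normal_video = [0.1, 0.2, 0.1, 0.15]
assert video_score(polyp_video) > video_score(normal_video)
```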
12. Pixel-wise Energy-biased Abstention Learning for Anomaly Segmentation on Complex Urban Driving Scenes
- Author
-
Tian, Yu, Liu, Yuyuan, Pang, Guansong, Liu, Fengbei, Chen, Yuanhong, and Carneiro, Gustavo
- Subjects
FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; Computer Science - Computer Vision and Pattern Recognition
- Abstract
State-of-the-art (SOTA) anomaly segmentation approaches on complex urban driving scenes explore pixel-wise classification uncertainty learned from outlier exposure, or external reconstruction models. However, previous uncertainty approaches that directly associate high uncertainty with anomalies may sometimes lead to incorrect anomaly predictions, and external reconstruction models tend to be too inefficient for real-time self-driving embedded systems. In this paper, we propose a new anomaly segmentation method, named pixel-wise energy-biased abstention learning (PEBAL), that explores pixel-wise abstention learning (AL) with a model that learns an adaptive pixel-level anomaly class, and an energy-based model (EBM) that learns the inlier pixel distribution. More specifically, PEBAL is based on a non-trivial joint training of EBM and AL, where EBM is trained to output high energy for anomaly pixels (from outlier exposure) and AL is trained such that these high-energy pixels receive an adaptive low penalty for being included in the anomaly class. We extensively evaluate PEBAL against the SOTA and show that it achieves the best performance across four benchmarks. Code is available at https://github.com/tianyu0207/PEBAL., ECCV 2022 Oral
- Published
- 2021
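The abstention side of PEBAL can be sketched as a segmentation head with one extra learned anomaly class: a pixel is flagged anomalous when that class wins. This toy version omits the energy-based training and the adaptive penalty, and the names are assumptions:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    e = [math.exp(l - m) for l in logits]
    s = sum(e)
    return [x / s for x in e]

def predict_with_abstention(logits_with_extra):
    """K inlier classes plus one learned anomaly ('abstention') class;
    a pixel is flagged anomalous when the extra class has the top score."""
    probs = softmax(logits_with_extra)
    k = probs.index(max(probs))
    return "anomaly" if k == len(probs) - 1 else f"class_{k}"

assert predict_with_abstention([2.0, 0.1, 5.0]) == "anomaly"   # extra class wins
assert predict_with_abstention([4.0, 0.1, 1.0]) == "class_0"   # inlier pixel
```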
13. Self-supervised Multi-class Pre-training for Unsupervised Anomaly Detection and Segmentation in Medical Images
- Author
-
Tian, Yu, Liu, Fengbei, Pang, Guansong, Chen, Yuanhong, Liu, Yuyuan, Verjans, Johan W., Singh, Rajvinder, and Carneiro, Gustavo
- Subjects
FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); FOS: Electrical engineering, electronic engineering, information engineering; Computer Science - Computer Vision and Pattern Recognition; Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Unsupervised anomaly detection (UAD) that requires only normal (healthy) training images is an important tool for enabling the development of medical image analysis (MIA) applications, such as disease screening, since it is often difficult to collect and annotate abnormal (or disease) images in MIA. However, heavily relying on the normal images may cause the model training to overfit the normal class. Self-supervised pre-training is an effective solution to this problem. Unfortunately, current self-supervision methods adapted from computer vision are sub-optimal for MIA applications because they do not explore MIA domain knowledge for designing the pretext tasks or the training process. In this paper, we propose a new self-supervised pre-training method for UAD designed for MIA applications, named Multi-class Strong Augmentation via Contrastive Learning (MSACL). MSACL is based on a novel optimisation to contrast normal and multiple classes of synthesised abnormal images, with each class enforced to form a tight and dense cluster in terms of Euclidean distance and cosine similarity, where abnormal images are formed by simulating a varying number of lesions of different sizes and appearance in the normal images. In the experiments, we show that our MSACL pre-training improves the accuracy of SOTA UAD methods on many MIA benchmarks using colonoscopy, fundus screening and Covid-19 Chest X-ray datasets.
- Published
- 2021
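The contrastive optimisation above can be illustrated with an InfoNCE-style loss, which is small when the anchor is similar to its positive and dissimilar from the negatives. The temperature and names are assumptions, and MSACL's actual objective additionally enforces tight clusters per synthetic-abnormality class:

```python
import math

def info_nce(positive_sim, negative_sims, temperature=0.1):
    """InfoNCE-style loss for one anchor: pull toward the positive pair
    (positive_sim) and push away from negatives; lower is better."""
    pos = math.exp(positive_sim / temperature)
    neg = sum(math.exp(s / temperature) for s in negative_sims)
    return -math.log(pos / (pos + neg))

# A tight positive pair with dissimilar negatives yields a small loss.
good = info_nce(0.95, [0.1, 0.05])
bad = info_nce(0.2, [0.9, 0.85])
assert good < bad
```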
14. Detecting, Localising and Classifying Polyps from Colonoscopy Videos using Deep Learning
- Author
-
Tian, Yu, Pu, Leonardo Zorron Cheng Tao, Liu, Yuyuan, Maicas, Gabriel, Verjans, Johan W., Burt, Alastair D., Shin, Seon Ho, Singh, Rajvinder, and Carneiro, Gustavo
- Subjects
FOS: Computer and information sciences; Computer Science - Machine Learning; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition; digestive system diseases; Machine Learning (cs.LG)
- Abstract
In this paper, we propose and analyse a system that can automatically detect, localise and classify polyps from colonoscopy videos. The detection of frames with polyps is formulated as a few-shot anomaly classification problem, where the training set is highly imbalanced, with the large majority of frames consisting of normal images and a small minority comprising frames with polyps. Colonoscopy videos may contain blurry images and frames displaying feces and water jet sprays to clean the colon -- such frames can mistakenly be detected as anomalies, so we have implemented a classifier to reject these two types of frames before polyp detection takes place. Next, given a frame containing a polyp, our method localises it (with a bounding box around the polyp) and classifies it into five different classes. Furthermore, we study a method to improve the reliability and interpretability of the classification result using uncertainty estimation and classification calibration. Classification uncertainty and calibration not only help improve classification accuracy by rejecting low-confidence and high-uncertainty results, but can also be used by doctors when deciding on the classification of a polyp. All the proposed detection, localisation and classification methods are tested using large data sets and compared with relevant baseline approaches., Preprint to submit to IEEE journals
- Published
- 2021
15. Clinical Features of Intestinal Ulcers Complicated by Epstein-Barr Virus Infection: Importance of Active Infection
- Author
-
Shuang Wu, Yajun Li, Chuan He, Tian Xinyue, Haibo Sun, Tongyu Tang, Yuqin Li, and Liu Yuyuan
- Subjects
Adult; Male; Medicine (General); medicine.medical_specialty; Epstein-Barr Virus Infections; Herpesvirus 4, Human; Adolescent; Article Subject; Colon; viruses; Clinical Biochemistry; Gastroenterology; Virus; 03 medical and health sciences; R5-920; 0302 clinical medicine; Internal medicine; hemic and lymphatic diseases; mental disorders; Genetics; medicine; Humans; Colonic Ulcer; Molecular Biology; Epstein–Barr virus infection; Lymph node; Aged; Intestinal ulcers; business.industry; Biochemistry (medical); Albumin; virus diseases; General Medicine; Middle Aged; medicine.disease; Prognosis; medicine.anatomical_structure; 030220 oncology & carcinogenesis; Cohort; Etiology; 030211 gastroenterology & hepatology; Colitis, Ulcerative; Female; business; psychological phenomena and processes; Research Article
- Abstract
Clinical characteristics of intestinal ulcers complicated with Epstein-Barr virus (EBV) infection remain poorly studied. This study is aimed at providing further insight into the clinical features of this patient cohort. The presence of serum EBV DNA was assessed in 399 patients with colonic ulcers, of which 30 cases were positive. In EBV-positive patients, the EBV-encoded RNA (EBER) was detected in intestinal tissues of 13 patients (EBER-positive group). The test was negative in 17 patients (EBER-negative group). The acute EBV infection rate in patients with colonic ulcers was 7.52%. Age and sex differences between the two groups were not statistically significant. Fever, abdominal lymph node enlargement, and crater-like gouged ulcer morphology were more common in the EBER-positive group (P < 0.05). The albumin level in the EBER-positive group was significantly lower compared to that in the EBER-negative group (P < 0.05). The copy count of EBV DNA in the blood of patients from the EBER-positive group was higher, and the prognosis was worse (P < 0.05). Clinical manifestations were more severe in the EBER-positive group. Endoscopic, histopathological, and biochemical findings were also more serious in this group of patients. The findings point to the importance of assessing EBER expression in patients with intestinal ulcers of various etiologies. EBER positivity should be viewed as a diagnostic marker of a more severe condition requiring more aggressive treatment.
- Published
- 2021
16. NVUM: Non-Volatile Unbiased Memory for Robust Medical Image Classification
- Author
-
Liu, Fengbei, Chen, Yuanhong, Tian, Yu, Liu, Yuyuan, Wang, Chong, Belagiannis, Vasileios, and Carneiro, Gustavo
- Subjects
FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition
- Abstract
Real-world large-scale medical image analysis (MIA) datasets have three challenges: 1) they contain noisy-labelled samples that affect training convergence and generalisation, 2) they usually have an imbalanced distribution of samples per class, and 3) they normally comprise a multi-label problem, where samples can have multiple diagnoses. Current approaches are commonly trained to solve a subset of those problems, but we are unaware of methods that address the three problems simultaneously. In this paper, we propose a new training module called Non-Volatile Unbiased Memory (NVUM), which non-volatilely stores a running average of the model logits, used by a new regularization loss for the noisy multi-label problem. We further unbias the classification prediction in the NVUM update to address the imbalanced learning problem. We run extensive experiments to evaluate NVUM on new benchmarks proposed by this paper, where training is performed on noisy multi-label imbalanced chest X-ray (CXR) training sets, formed by Chest-Xray14 and CheXpert, and the testing is performed on the clean multi-label CXR datasets OpenI and PadChest. Our method outperforms previous state-of-the-art CXR classifiers and previous methods that can deal with noisy labels on all evaluations. Our code is available at https://github.com/FBLADL/NVUM., Comment: MICCAI 2022 Early Accept
- Published
- 2021
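The running average of logits that NVUM stores per training sample can be sketched as an exponential moving average; the momentum value and function name are illustrative assumptions:

```python
def nvum_update(memory, logits, beta=0.9):
    """Non-volatile memory step: a per-sample running average of logits
    across epochs. A noisy label keeps disagreeing with this slowly-moving
    estimate, which the regularisation loss can exploit."""
    return [beta * m + (1 - beta) * l for m, l in zip(memory, logits)]

mem = [0.0, 0.0]
for _ in range(3):                 # three epochs of the same sample
    mem = nvum_update(mem, [1.0, -1.0])
assert mem[0] > 0 and mem[1] < 0   # the memory tracks the persistent signal
```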
17. Unsupervised Dual Adversarial Learning for Anomaly Detection in Colonoscopy Video Frames
- Author
-
Liu, Yuyuan, Tian, Yu, Maicas, Gabriel, Pu, Leonardo Z. C. T., Singh, Rajvinder, Verjans, Johan W., and Carneiro, Gustavo
- Subjects
FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); FOS: Electrical engineering, electronic engineering, information engineering; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; Computer Science - Computer Vision and Pattern Recognition; Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
The automatic detection of frames containing polyps from a colonoscopy video sequence is an important first step for a fully automated colonoscopy analysis tool. Typically, such a detection system is built using a large annotated data set of frames with and without polyps, which is expensive to obtain. In this paper, we introduce a new system that detects frames containing polyps as anomalies from a distribution of frames from exams that do not contain any polyps. The system is trained using a one-class training set consisting of colonoscopy frames without polyps -- such a training set is considerably less expensive to obtain, compared to the 2-class data set mentioned above. During inference, the system is only able to reconstruct frames without polyps, and when it tries to reconstruct a frame with a polyp, it automatically removes (i.e., 'photoshops') the polyp from the frame -- the difference between the input and reconstructed frames is used to detect frames with polyps. We name our proposed model the anomaly detection generative adversarial network (ADGAN), comprising a dual GAN with two generators and two discriminators. We show that our proposed approach achieves the state-of-the-art result on this data set, compared with recently proposed anomaly detection systems., Accepted by ISBI 2020
- Published
- 2019
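As the abstract states, the difference between the input and its reconstruction drives detection. A minimal per-frame score using mean absolute difference, with hypothetical toy frames (the real system compares images, not 3-element vectors):

```python
def anomaly_score(frame, reconstruction):
    """Per-frame score: mean absolute difference between the input and
    what the polyp-free generator managed to reconstruct."""
    return sum(abs(a - b) for a, b in zip(frame, reconstruction)) / len(frame)

normal = [0.5, 0.5, 0.5]
recon_normal = [0.5, 0.48, 0.52]   # near-perfect reconstruction
polyp = [0.5, 0.9, 0.5]
recon_polyp = [0.5, 0.5, 0.5]      # the generator 'removes' the polyp
assert anomaly_score(polyp, recon_polyp) > anomaly_score(normal, recon_normal)
```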
18. Leflunomide-induced acute liver failure: a case report
- Author
-
Liu Yuyuan and Zhang Xu-qing
- Subjects
medicine.medical_specialty; business.industry; Liver failure; General Medicine; medicine.disease; Gastroenterology; Surgery; Male patient; Internal medicine; Rheumatoid arthritis; medicine; business; Leflunomide; medicine.drug
- Abstract
A 27-year-old male patient with rheumatoid arthritis was diagnosed with acute liver failure when he was taking leflunomide, a new immunosuppressant. This case illustrates the risk that leflunomide may lead to severe hepatotoxicity.
- Published
- 2010