7 results for "Qiu, Jianing"
Search Results
2. Development and validation of machine learning models to predict the need for haemostatic therapy in acute upper gastrointestinal bleeding.
- Author
- Nazarian, Scarlet, Lo, Frank Po Wen, Qiu, Jianing, Patel, Nisha, Lo, Benny, and Ayaru, Lakshmana
- Subjects
- HEMOSTATICS, RISK assessment, RANDOM forest algorithms, PREDICTIVE tests, GASTROINTESTINAL hemorrhage, ACUTE diseases, PREDICTION models, HUMAN services programs, RECEIVER operating characteristic curves, RESEARCH funding, EVALUATION of human services programs, RETROSPECTIVE studies, DESCRIPTIVE statistics, LONGITUDINAL method, OPERATIVE surgery, ENDOSCOPIC gastrointestinal surgery, INTERVENTIONAL radiology, MACHINE learning, COMPARATIVE studies, CONFIDENCE intervals, DATA analysis software, SENSITIVITY & specificity (Statistics)
- Abstract
Background: Acute upper gastrointestinal bleeding (AUGIB) is a major cause of morbidity and mortality. This presentation, however, is not universally high-risk, as only 20–30% of bleeds require urgent haemostatic therapy. Nevertheless, the current standard of care is for all patients admitted to an inpatient bed to undergo endoscopy within 24 h for risk stratification, which is invasive, costly and difficult to achieve in routine clinical practice. Objectives: To develop novel non-endoscopic machine learning models for AUGIB to predict the need for haemostatic therapy by endoscopic, radiological or surgical intervention. Design: A retrospective cohort study. Method: We analysed data from patients admitted with AUGIB to hospitals from 2015 to 2020 (n = 970). Machine learning models were internally validated to predict the need for haemostatic therapy. The performance of the models was compared to the Glasgow-Blatchford score (GBS) using area under receiver operating characteristic (AUROC) curves. Results: The random forest classifier [AUROC 0.84 (0.80–0.87)] had the best performance and was superior to the GBS [AUROC 0.75 (0.72–0.78), p < 0.001] in predicting the need for haemostatic therapy in patients with AUGIB. A GBS cut-off of ⩾12 was associated with an accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of 0.74, 0.49, 0.81, 0.41 and 0.85, respectively. The random forest model performed better, with an accuracy, sensitivity, specificity, PPV and NPV of 0.82, 0.54, 0.90, 0.60 and 0.88, respectively. Conclusion: We developed and validated a machine learning algorithm with high accuracy and specificity in predicting the need for haemostatic therapy in AUGIB. This could be used to triage high-risk patients to urgent endoscopy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
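The abstract above reports accuracy, sensitivity, specificity, PPV and NPV for the GBS cut-off and the random forest model. As a minimal sketch of how those five metrics relate to a 2×2 confusion matrix (the counts below are hypothetical, not from the study):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Compute the five metrics reported in the abstract from a 2x2
    confusion matrix of predicted vs. actual need for haemostatic therapy."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),  # true-positive rate among patients needing therapy
        "specificity": tn / (tn + fp),  # true-negative rate among patients not needing therapy
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts, for illustration only:
metrics = diagnostic_metrics(tp=54, fp=36, fn=46, tn=164)
```

Note the trade-off visible in the reported numbers: both classifiers favour specificity and NPV over sensitivity, which suits a tool meant to rule patients out of urgent endoscopy.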
3. Clinical value of endoscopic ultrasound sound speed in differential diagnosis of pancreatic solid lesion and prognosis of pancreatic cancer.
- Author
- Qiu, Jianing, Li, Kangrong, Long, Xiuyan, Yu, Xiaoyu, Gong, Pan, Long, Yu, Wang, Xiaoyan, and Tian, Li
- Subjects
- SPEED of sound, ENDOSCOPIC ultrasonography, PANCREATIC cancer, DIFFERENTIAL diagnosis, CANCER prognosis, PANCREATIC tumors
- Abstract
Background: Differential diagnosis of pancreatic solid lesions (PSL) and prognosis of pancreatic cancer (PC) are clinical challenges. We aimed to explore the differential diagnostic value of sound speed (SS) obtained from endoscopic ultrasonography (EUS) in PSL and the prognostic value of SS in PC. Methods: Patients with PSL at The Third Xiangya Hospital of Central South University from March 2019 to October 2019 were prospectively enrolled, and SS was measured from their lesions. Patients were divided into the PC group and the pancreatic benign lesion (PBL) group. SS1 is the SS of the lesion and SS2 is the SS of normal tissue adjacent to the lesion; ratio1 is SS1 divided by SS2 (ratio1 = SS1/SS2). Results: Eighty patients were enrolled (24 PBL patients, 56 PC patients). SS1 and ratio1 in the PC group were higher than in the PBL group (SS1: 1568.00 vs. 1550.00, Z = −2.066, p = 0.039; ratio1: 1.0110 vs. 1.0051, Z = −3.391, p = 0.001). SS1 in PC was higher than SS2 (Z = −6.503, p < 0.001). In the nonsurgical group of PC, low ratio1 predicted high overall survival (OS) (7.000 months vs. 4.000 months; p = 0.039). In the surgical group of PC, low SS1 was associated with low median OS (4.000 months vs. 12.000 months; p = 0.033). Conclusions: SS plays a vital role in distinguishing between PBL and PC. Higher SS1 and ratio1 obtained by EUS are more indicative of PC than PBL. In PC patients, high SS1 may predict pancreatic lesions. In the nonsurgical group of PC, low ratio1 may predict high OS; in the surgical group of PC, low SS1 may predict low OS. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
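The ratio1 measure defined in the abstract above is a simple normalisation of lesion sound speed by adjacent normal tissue. A minimal sketch, using illustrative values near the medians reported in the abstract (in m/s):

```python
def sound_speed_ratio(ss1, ss2):
    """ratio1 = SS1 / SS2, where SS1 is the sound speed of the lesion and
    SS2 that of normal tissue adjacent to the lesion (per the abstract)."""
    return ss1 / ss2

# Illustrative values near the reported group medians, not patient data:
r = sound_speed_ratio(1568.0, 1550.0)  # ≈ 1.0116, above the PBL median ratio of 1.0051
```

Normalising by adjacent tissue controls for between-patient variation in baseline sound speed, which is presumably why ratio1 separates the groups more strongly (p = 0.001) than SS1 alone (p = 0.039).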
4. The application of multimodal large language models in medicine
- Author
- Qiu, Jianing, Yuan, Wu, and Lam, Kyle
- Published
- 2024
- Full Text
- View/download PDF
5. Foundation models: the future of surgical artificial intelligence?
- Author
- Lam, Kyle and Qiu, Jianing
- Subjects
- ARTIFICIAL intelligence, GENERATIVE pre-trained transformers, COMPUTER-assisted image analysis (Medicine), ROBOTIC path planning, COMPUTER science
- Abstract
The article discusses the concept of foundation models (FMs) in surgical artificial intelligence (AI). FMs are trained on broad datasets and can be adapted to various surgical tasks, potentially revolutionizing the field of surgical AI. The article highlights the challenges in developing a surgical FM, such as the need for large volumes of data and computing power, as well as the contextual nature of surgery. It also explores potential applications of FMs in surgery, including risk prediction and anatomical segmentation. The article emphasizes the importance of validation, ethical considerations, and collaboration in the development of FMs. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
6. Dietary Assessment with Multimodal ChatGPT: A Systematic Analysis.
- Author
- Lo FP, Qiu J, Wang Z, Chen J, Xiao B, Yuan W, Giannarou S, Frost G, and Lo B
- Abstract
Conventional approaches to dietary assessment are primarily grounded in self-reporting methods or structured interviews conducted under the supervision of dietitians. These methods, however, are often subjective, potentially inaccurate, and time-intensive. Although artificial intelligence (AI)-based solutions have been devised to automate the dietary assessment process, prior AI methodologies tackle dietary assessment in a fragmented landscape (e.g., merely recognizing food types or estimating portion size) and encounter challenges in generalizing across a diverse range of food categories, dietary behaviors, and cultural contexts. Recently, the emergence of multimodal foundation models, such as GPT-4V, has exhibited transformative potential across a wide range of tasks (e.g., scene understanding and image captioning) in various research domains. These models have demonstrated remarkable generalist intelligence and accuracy, owing to their large-scale pre-training on broad datasets and substantially scaled model size. In this study, we explore the application of GPT-4V, which powers multimodal ChatGPT, to dietary assessment, along with prompt engineering and passive monitoring techniques. We evaluated the proposed pipeline using a self-collected, semi-free-living dietary intake dataset comprising 16 real-life eating episodes, captured through wearable cameras. Our findings reveal that GPT-4V excels in food detection under challenging conditions without any fine-tuning or adaptation using food-specific datasets. By guiding the model with specific language prompts (e.g., African cuisine), it shifts from recognizing common staples like rice and bread to accurately identifying regional dishes like banku and ugali. Another standout feature of GPT-4V is its contextual awareness. GPT-4V can leverage surrounding objects as scale references to deduce the portion sizes of food items, further facilitating the process of dietary assessment.
- Published
- 2024
- Full Text
- View/download PDF
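The abstract above describes guiding GPT-4V with cuisine-specific language prompts. A minimal sketch of assembling such a prompt as an OpenAI-style multimodal chat message; the prompt wording and the `cuisine_hint` parameter are illustrative assumptions, not the authors' actual prompts:

```python
import base64

def build_dietary_prompt(image_bytes, cuisine_hint=None):
    """Assemble an OpenAI-style multimodal chat message asking the model to
    identify foods and estimate portions from an eating-episode image."""
    text = ("Identify each food item visible in this eating episode "
            "and estimate its portion size, using surrounding objects "
            "as scale references.")
    if cuisine_hint:
        # Per the abstract, a regional hint (e.g. 'African cuisine') shifts
        # recognition from generic staples toward regional dishes.
        text += f" Assume the dishes are from {cuisine_hint}."
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }]

messages = build_dietary_prompt(b"\xff\xd8fake-jpeg-bytes",
                                cuisine_hint="African cuisine")
```

The returned `messages` list would then be sent to a multimodal chat endpoint; the list structure shown follows the common chat-completions message format with base64-encoded image content.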
7. Egocentric Image Captioning for Privacy-Preserved Passive Dietary Intake Monitoring.
- Author
- Qiu J, Lo FP, Gu X, Jobarteh ML, Jia W, Baranowski T, Steiner-Asiedu M, Anderson AK, McCrory MA, Sazonov E, Sun M, Frost G, and Lo B
- Subjects
- Diet, Nutrition Assessment, Feeding Behavior, Privacy, Eating
- Abstract
Camera-based passive dietary intake monitoring can continuously capture the eating episodes of a subject, recording rich visual information such as the type and volume of food being consumed, as well as the subject's eating behaviors. However, no current method incorporates these visual clues to provide a comprehensive context of dietary intake from passive recording (e.g., whether the subject is sharing food with others, what food the subject is eating, and how much food is left in the bowl). At the same time, privacy is a major concern when egocentric wearable cameras are used for capture. In this article, we propose a privacy-preserved secure solution (i.e., egocentric image captioning) for dietary assessment with passive monitoring, which unifies food recognition, volume estimation, and scene understanding. By converting images into rich text descriptions, nutritionists can assess individual dietary intake from the captions instead of the original images, reducing the risk of privacy leakage. To this end, an egocentric dietary image captioning dataset has been built, consisting of in-the-wild images captured by head-worn and chest-worn cameras in field studies in Ghana. A novel transformer-based architecture is designed to caption egocentric dietary images. Comprehensive experiments have been conducted to evaluate the effectiveness and to justify the design of the proposed architecture for egocentric dietary image captioning. To the best of our knowledge, this is the first work to apply image captioning to dietary intake assessment in real-life settings.
- Published
- 2024
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library