839 results on '"Multi modality"'
Search Results
152. Chapter 3. The Challenge of Prosecuting Intracellular Protein–Protein Interactions: A Multi-modality Examination of Chemical Matter Identification, Biophysical Characterization, and Membrane Permeability
- Author
-
T. K. Sawyer, D. M. Tellers, P. R. Sheth, L. Fan, and W. T. McElroy
- Subjects
Modality (human–computer interaction) ,Membrane permeability ,Computer science ,Intracellular protein ,Identification (biology) ,Computational biology ,Small molecule ,Multi modality ,Characterization (materials science) ,Protein–protein interaction - Abstract
This review focuses on some of the latest approaches to developing small-molecule, peptide, and antibody therapeutics as inhibitors of intracellular protein–protein interactions (PPIs). While distinct, each class of therapeutics relies on identifying chemical matter, accurately characterizing the interaction, optimizing the molecule for permeability, and engineering drug-like properties into the modality. It is undoubtedly an optimistic time, as tremendous developments have been made across each of these areas in the last decade. This review details the advantages and disadvantages of these therapeutic modalities, with a focus on recent advances as they relate to ligandability and permeability.
- Published
- 2020
153. Atypical dermoid cyst of the ovary during pregnancy: A multi-modality diagnostic approach
- Author
-
Arnaldo Stanzione, Cesare Sirignano, Teresa Perillo, M. Amitrano, Valeria Romeo, Renato Cuocolo, Ernesto Soscia, Simone Maurea, Perillo, T., Romeo, V., Amitrano, M., Cuocolo, R., Stanzione, A., Sirignano, C., Soscia, E., and Maurea, S.
- Subjects
lcsh:Medical physics. Medical radiology. Nuclear medicine ,medicine.medical_specialty ,Abdominal pain ,lcsh:R895-920 ,Ovary ,Ovarian teratoma ,Multi modality ,Multimodality imaging ,030218 nuclear medicine & medical imaging ,Lesion ,03 medical and health sciences ,0302 clinical medicine ,Pregnancy ,otorhinolaryngologic diseases ,medicine ,Radiology, Nuclear Medicine and imaging ,Ovarian Teratoma ,Left ovary ,business.industry ,Dermoid cyst ,medicine.disease ,medicine.anatomical_structure ,Genitourinary ,Radiology ,medicine.symptom ,business ,030217 neurology & neurosurgery - Abstract
We report the case of a 37-year-old woman, six months pregnant, who presented with abdominal pain and was found to have a dermoid cyst of the left ovary. The diagnostic work-up required a multi-modality imaging approach. In particular, US and MR examinations were performed initially but were inconclusive for a final diagnosis. Hence, a CT scan was subsequently used to characterize the lesion. An integrated imaging approach is therefore recommended. Keywords: Dermoid cyst, Ovarian teratoma, Pregnancy, Multimodality imaging
- Published
- 2020
154. P825 Multi-modality imaging for noninvasive diagnosis of transthyretin amyloid cardiomyopathy-single centre experience
- Author
-
M Gawor, J Wnuk, A Teresinska, Jacek Grzybowski, Piotr Michałek, and Magdalena Marczak
- Subjects
medicine.medical_specialty ,biology ,business.industry ,General Medicine ,Multi modality ,Transthyretin ,Single centre ,biology.protein ,medicine ,Radiology, Nuclear Medicine and imaging ,Radiology ,Cardiology and Cardiovascular Medicine ,Amyloid cardiomyopathy ,business - Abstract
Background Transthyretin amyloidosis (ATTR) is a rare progressive disease that may present as heart failure with preserved ejection fraction, severe aortic stenosis, hypertrophic cardiomyopathy (HCM) or restrictive cardiomyopathy. There are two types of ATTR: hereditary ATTR (hATTR), caused by mutations in the TTR gene, and wild-type ATTR (wtATTR), resulting from deposition of wild-type TTR protein. Purpose We describe the clinical heterogeneity of ATTR patients from our centre diagnosed noninvasively in 2018-2019. Methods All patients presented intensive cardiac uptake on 99mTc-DPD scintigraphy. Light-chain amyloidosis was excluded. Results 8 patients were diagnosed with ATTR (Table 1). Three unrelated male patients were diagnosed with hATTR due to rare mutations: 2 of them had the Phe33Leu mutation and 1 patient had the Glu89Lys mutation. Five patients (all male) were diagnosed with wtATTR. Age of onset differed among the patients. Characteristic clinical features included cardiomyopathy with increased left and right ventricular wall thickness. Only 2 patients had a restrictive filling pattern; 3 patients had atrial fibrillation. Laboratory examination showed increased levels of troponin T and NT-proBNP. Three patients had bilateral carpal tunnel syndrome. Thanks to DPD scintigraphy we excluded ATTR in two patients with false-positive results of histological examination for TTR-related amyloid deposits. Conclusions Although ATTR is known for its broad clinical spectrum, patients from our centre presented mostly as HCM phenocopies, but in different stages of heart failure. Appropriate diagnosis of ATTR is crucial and has a direct therapeutic impact. Echocardiography raises the suspicion of amyloid cardiomyopathy, while another imaging technique (DPD scintigraphy) confirms or excludes it noninvasively.
Patient: 1, 2, 3, 4, 5, 6, 7, 8
Mutation: Glu89Lys, Phe33Leu, Phe33Leu, wild type, wild type, wild type, wild type, wild type
Sex: male, male, male, male, male, male, male, male
Age of onset: 57, 56, 55, 77, 78, 80, 77, 76
Electrocardiogram: AF; low QRS voltage; low QRS voltage; AF, RBBB; LVH; LVH; pseudoinfarct pattern, low QRS voltage; AF, LVH
Maximal wall thickness [mm]: 23, 20, 18, 28, 22, 23, 18, 20
LVEF [%]: 45, 40, 40, 60, 65, 60, 45, 55
Asymmetric hypertrophy pattern: +, -, -, -, +, +, -, +
NYHA: III, II, II, II, III, II, II, II
NT-proBNP [pg/ml]: 2122, 1200, 1500, 2755, 222, 2630, 2426
hs-Troponin T [ng/l]: 50, 98, 42, 65, 35, 63, 64
Abstract P825 Figure 1
- Published
- 2020
155. 1093 Multi-modality imaging of a rare cause of constrictive physiology: pericardial ring
- Author
-
A Elsawy, A Kharabish, Dina Labib, N Thabet, Amir Anwar Samaan, M Shaaban, and H Mahmoud-Elsayed
- Subjects
medicine.medical_specialty ,medicine.diagnostic_test ,business.industry ,medicine.medical_treatment ,Computed tomography ,General Medicine ,Multi modality ,Pericardial sac ,Constriction procedure ,Cardiac Surgery procedures ,Ultrasonographic echogenicity ,Medical imaging ,Medicine ,Radiology, Nuclear Medicine and imaging ,Radiology ,Cardiology and Cardiovascular Medicine ,business ,Pericardiectomy - Abstract
Introduction Proper pathophysiologic diagnosis of constrictive pericarditis (CP) usually mandates utilization of multiple diagnostic tools. We report a rare cause of constrictive pericarditis in a 33-year-old female presenting with painful epigastric pulsations, easy fatigability, and lower limb edema of 6 months' duration. The patient had no past history of trauma, cardiac surgery, or tuberculosis. Trans-thoracic echocardiography (figure 1, panels A, parasternal long-axis view, and B, apical 4-chamber view) showed an echogenic band (arrow head) across the left (LV) and right (RV) ventricles, with a compressed RV cavity. Septal bounce, with shifting of the inter-ventricular septum towards the LV during deep inspiration, was noted; however, Doppler evaluation of the diastolic function was not conclusive of constriction. Cardiac magnetic resonance (figure 1, panel C) and computed tomography with 3D segmentation, using Materialise Mimics and 3-matic software (figure 1, panel D), showed an 8-mm thick, calcified pericardial ring (arrow head) encircling and indenting both ventricles at mid-cavitary level, resulting in bi-ventricular compression and a dumbbell-shaped heart. Both ventricles were of average cavity size, with preserved LV systolic function and mildly impaired RV systolic function. Right heart catheterization confirmed the diagnosis of constrictive pericarditis. The patient was referred for surgical pericardiectomy. Conclusion Multi-modality imaging is integral to the diagnosis of CP. Our case represents a rare etiology of constrictive physiology. Abstract 1093 Figure 1
- Published
- 2020
156. Multi-Modality Generative Adversarial Networks with Tumor Consistency Loss for Brain MR Image Synthesis
- Author
-
Yefeng Zheng, Hongen Liao, Yifan Hu, and Bingyu Xin
- Subjects
FOS: Computer and information sciences ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,010501 environmental sciences ,Fluid-attenuated inversion recovery ,01 natural sciences ,Multi modality ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Robustness (computer science) ,medicine ,FOS: Electrical engineering, electronic engineering, information engineering ,0105 earth and related environmental sciences ,medicine.diagnostic_test ,business.industry ,Image and Video Processing (eess.IV) ,Pattern recognition ,Magnetic resonance imaging ,Electrical Engineering and Systems Science - Image and Video Processing ,Image synthesis ,Artificial intelligence ,Mr images ,business ,Generative grammar - Abstract
Magnetic Resonance (MR) images of different modalities can provide complementary information for clinical diagnosis, but acquiring all modalities is often costly. Most existing methods focus only on synthesizing missing images between two modalities, which limits their robustness and efficiency when multiple modalities are missing. To address this problem, we propose a multi-modality generative adversarial network (MGAN) to synthesize three high-quality MR modalities (FLAIR, T1 and T1ce) from a single MR modality, T2, simultaneously. The experimental results show that the quality of the images synthesized by our proposed method is better than that of the baseline model, pix2pix. Besides, for MR brain image synthesis it is important to preserve the critical tumor information in the generated modalities, so we further introduce a multi-modality tumor consistency loss to MGAN, called TC-MGAN. We use the modalities synthesized by TC-MGAN to boost tumor segmentation accuracy, and the results demonstrate its effectiveness. (Comment: 5 pages, 3 figures, accepted to IEEE ISBI 2020)
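As a rough illustration of the idea behind a tumor consistency loss (the paper's exact formulation is not given in this abstract), one can add an extra reconstruction penalty restricted to the tumor mask on top of a global L1 term; the function name and weighting below are hypothetical:

```python
def tumor_consistency_loss(synth, real, tumor_mask, lam=10.0):
    """Global L1 reconstruction loss plus a tumor-masked L1 term weighted by lam.
    Illustrative sketch only, not the TC-MGAN implementation."""
    n = len(real)
    global_l1 = sum(abs(s - r) for s, r in zip(synth, real)) / n
    m = sum(tumor_mask) or 1  # avoid division by zero when no tumor voxels
    tumor_l1 = sum(abs(s - r) * t for s, r, t in zip(synth, real, tumor_mask)) / m
    return global_l1 + lam * tumor_l1

# A perfect synthesis incurs zero loss:
print(tumor_consistency_loss([1.0, 2.0], [1.0, 2.0], [1, 0]))  # 0.0
```

The point of the masked term is that errors inside the tumor region are penalized far more heavily than errors elsewhere, which is what "preserving critical tumor information" requires of the generator.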
- Published
- 2020
- Full Text
- View/download PDF
157. Multi-modality Information Fusion for Radiomics-Based Neural Architecture Search
- Author
-
Jinman Kim, Dagan Feng, Yige Peng, Lei Bi, and Michael J. Fulham
- Subjects
0301 basic medicine ,medicine.diagnostic_test ,business.industry ,Computer science ,Radiography ,Computed tomography ,Pattern recognition ,Convolutional neural network ,Multi modality ,030218 nuclear medicine & medical imaging ,Imaging modalities ,03 medical and health sciences ,Information fusion ,030104 developmental biology ,0302 clinical medicine ,Radiomics ,Positron emission tomography ,medicine ,Artificial intelligence ,business - Abstract
‘Radiomics’ is a method that extracts mineable quantitative features from radiographic images. These features can then be used to determine prognosis, for example, predicting the development of distant metastases (DM). Existing radiomics methods, however, require complex manual effort, including the design of hand-crafted radiomic features and their extraction and selection. Recent radiomics methods, based on convolutional neural networks (CNNs), also require manual input in network architecture design and hyper-parameter tuning. Radiomic complexity is further compounded when there are multiple imaging modalities, for example, combined positron emission tomography - computed tomography (PET-CT), where there is functional information from PET and complementary anatomical localization information from CT. Existing multi-modality radiomics methods manually fuse the data that are extracted separately. Reliance on manual fusion often results in sub-optimal fusion because it is dependent on an ‘expert’s’ understanding of medical images. In this study, we propose a multi-modality neural architecture search method (MM-NAS) to automatically derive optimal multi-modality image features for radiomics and thus negate the dependence on a manual process. We evaluated our MM-NAS on the ability to predict DM using a public PET-CT dataset of patients with soft-tissue sarcomas (STSs). Our results show that our MM-NAS had a higher prediction accuracy when compared to state-of-the-art radiomics methods.
- Published
- 2020
158. Non-Rigid Registration of Multi-Modality Medical Image Using Combined Gradient Information and Mutual Information
- Author
-
Wen-long Li, Zhouping Yin, Fan Rao, and Xuming Zhang
- Subjects
business.industry ,Computer science ,Health Informatics ,Mutual information ,01 natural sciences ,Multi modality ,010305 fluids & plasmas ,030218 nuclear medicine & medical imaging ,Image (mathematics) ,03 medical and health sciences ,0302 clinical medicine ,0103 physical sciences ,Radiology, Nuclear Medicine and imaging ,Computer vision ,Artificial intelligence ,business - Published
- 2018
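The mutual information referenced in the title of entry 158 is a standard similarity measure for multi-modality registration; below is a minimal histogram-based sketch for discrete intensity values (not the authors' implementation, which additionally combines gradient information):

```python
import math
from collections import Counter

def mutual_information(img_a, img_b):
    """Mutual information between two equal-length discrete intensity
    sequences, estimated from their joint and marginal histograms."""
    n = len(img_a)
    pa, pb = Counter(img_a), Counter(img_b)
    pab = Counter(zip(img_a, img_b))
    mi = 0.0
    for (a, b), count in pab.items():
        p_joint = count / n
        mi += p_joint * math.log(p_joint / ((pa[a] / n) * (pb[b] / n)))
    return mi

# Statistically independent images carry zero mutual information:
print(round(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]), 6))  # 0.0
```

For registration, one modality is transformed while maximizing this quantity: intensities need not match across CT and MRI, only co-occur predictably, which is why MI works where direct intensity differences fail.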
159. Multi-Level Multi-Modality Fusion Radiomics: Application to PET and CT Imaging for Prognostication of Head and Neck Cancer
- Author
-
Wenbing Lv, Jianhua Ma, Arman Rahmim, Saeed Ashrafinia, and Lijun Lu
- Subjects
Adult ,Male ,Adolescent ,Computer science ,Feature extraction ,Context (language use) ,Multi modality ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Young Adult ,0302 clinical medicine ,Health Information Management ,Radiomics ,Positron Emission Tomography Computed Tomography ,Image Interpretation, Computer-Assisted ,medicine ,Humans ,Electrical and Electronic Engineering ,Aged ,Aged, 80 and over ,Fusion ,business.industry ,Head and neck cancer ,Pattern recognition ,Middle Aged ,medicine.disease ,Prognosis ,Computer Science Applications ,Feature (computer vision) ,Head and Neck Neoplasms ,030220 oncology & carcinogenesis ,Female ,Artificial intelligence ,Ct imaging ,business ,Head ,Neck ,Biotechnology - Abstract
To characterize intra-tumor heterogeneity comprehensively, we propose a multi-level fusion strategy to combine PET and CT information at the image-, matrix- and feature-levels towards improved prognosis. Specifically, we developed fusion radiomics in the context of 3 prognostic outcomes in a multi-center setting (4 centers) involving 296 head & neck cancer patients. Eight clinical parameters were first utilized to build a (1) clinical model. We also built models by extracting 127 radiomics features from (2) PET images alone; (3–8) PET and CT images fused via wavelet-based fusion (WF) using CT-weights of 0.2, 0.4, 0.6 and 0.8, gradient transfer fusion (GTF), and guided filtering-based fusion (GFF); (9) fused matrices (sumMat); (10–11) fused features constructed via feature averaging (avgFea) and feature concatenation (conFea); and finally, (12) CT images alone; the above models were also expanded to include both clinical and radiomics features. Seven variations of training and testing partitions were investigated. Highest performance in 5, 6 and 5 partitions was achieved by image-level fusion strategies for RFS, MFS and OS prediction, respectively. Among all partitions, WF0.6 and WF0.8 showed significantly higher performance than the CT model for RFS (C-index: 0.60 ± 0.04 vs. 0.56 ± 0.03, p-value: 0.015) and MFS (C-index: 0.71 ± 0.13 vs. 0.62 ± 0.08, p-value: 0.020) predictions, respectively. In partition CER 23 vs. 14, WF0.6 significantly outperformed the Clinical model for RFS prediction (C-index: 0.67 vs. 0.53, p-value: 0.003); both avgFea and WF0.6 showed a C-index of 0.64, significantly higher than that of PET only (C-index: 0.51, p-value: 0.018 and 0.031, respectively), for OS prediction. Fusion radiomics modeling showed varying improvements compared to single-modality models for different outcome predictions in different partitions, highlighting the importance of generalizing radiomics models. Image-level fusion holds potential to capture more useful characteristics.
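The feature- and image-level fusion strategies named above (avgFea, conFea, and CT-weighted fusion such as WF0.6) can be sketched as follows; note that the paper's wavelet-based fusion weights wavelet coefficients rather than raw pixels, so the pixel-wise version here is only an approximation of the idea:

```python
def avg_fea(pet_feats, ct_feats):
    """avgFea: element-wise average of PET and CT radiomic feature vectors."""
    return [(p + c) / 2.0 for p, c in zip(pet_feats, ct_feats)]

def con_fea(pet_feats, ct_feats):
    """conFea: concatenate PET and CT feature vectors into one longer vector."""
    return list(pet_feats) + list(ct_feats)

def weighted_image_fusion(pet_img, ct_img, ct_weight=0.6):
    """Pixel-wise weighted fusion of co-registered images with a CT weight
    (cf. WF0.6); illustrative stand-in for coefficient-domain wavelet fusion."""
    return [(1.0 - ct_weight) * p + ct_weight * c for p, c in zip(pet_img, ct_img)]

pet, ct = [1.0, 2.0, 3.0], [3.0, 2.0, 1.0]
print(avg_fea(pet, ct))       # [2.0, 2.0, 2.0]
print(len(con_fea(pet, ct)))  # 6
```

Averaging keeps the feature dimensionality fixed, while concatenation doubles it and leaves the model to learn which modality matters per feature; the paper compares exactly this trade-off across its twelve models.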
- Published
- 2019
160. P05 Implementation of a multi-modality education program in australia
- Author
-
M Wills, M Pigott, D Detering, and M Nolte
- Subjects
Advance care planning ,Medical education ,Suite ,Facilitator ,Learning Management ,Context (language use) ,Online evaluation ,Psychology ,Multi modality ,Dreyfus model of skill acquisition - Abstract
Background Despite the acknowledged importance of advance care planning (ACP), a barrier to uptake is a shortage of appropriately skilled clinicians to have these conversations. A literature review showed limited availability of education resources that considered the Australian context. This implementation project involved the development of education resources that facilitated scaffolding of learning, from novice to expert, and included online modules, clinician workshops and train-the-trainer workshops as part of a standardised program of education in ACP. Methods Nine online modules, education resources to enable clinicians to practise ACP discussions in workshops, and training for facilitators to implement their own workshops were developed. Results From July 2017 to June 2018, 2656 people were registered on the learning management site and 1541 completed at least one online module. Feedback from the online evaluation identified that 99% of 4262 people rated their likelihood of recommending the module to colleagues as ≥5 out of 10. Seventy percent of the 144 people who attended the clinician workshops in the 12-month period specifically identified communication with patients and colleagues as the key area of learning for implementation. Of the 16 people who attended the two facilitator workshops, 6 have accessed the education resources and facilitated their own workshops. Conclusion This program considered the implementation of ACP education using a framework for learners to scaffold their knowledge. The suite of education resources provides a sustainable program of education by encouraging development of skills to the expert facilitator level. There is clearly a demand and interest in multi-modality learning.
- Published
- 2019
161. Fiducial markers visibility and artefacts in prostate cancer radiotherapy multi-modality imaging
- Author
-
Raymond B. King, Sarah O.S. Osman, Cormac McGrath, Emily Russell, Kevin M. Prise, Alan R. Hounsell, Conor K. McGarry, Suneil Jain, and Karen Crowther
- Subjects
Male ,Organs at Risk ,medicine.medical_treatment ,Pelvic phantom ,Signal-To-Noise Ratio ,Artefacts ,Multimodal Imaging ,030218 nuclear medicine & medical imaging ,Management of prostate cancer ,Prostate cancer ,0302 clinical medicine ,Image Processing, Computer-Assisted ,Phantoms, Imaging ,Prostate Cancer ,Visibility (geometry) ,Radiotherapy Dosage ,Cone-Beam Computed Tomography ,lcsh:Neoplasms. Tumors. Oncology. Including cancer and carcinogens ,Magnetic Resonance Imaging ,Oncology ,030220 oncology & carcinogenesis ,Artifacts ,lcsh:Medical physics. Medical radiology. Nuclear medicine ,lcsh:R895-920 ,lcsh:RC254-282 ,Imaging phantom ,Multi modality ,03 medical and health sciences ,SDG 3 - Good Health and Well-being ,Fiducial Markers ,medicine ,Humans ,Radiology, Nuclear Medicine and imaging ,IGRT ,Image-guided radiation therapy ,business.industry ,Research ,Radiotherapy Planning, Computer-Assisted ,Prostatic Neoplasms ,medicine.disease ,body regions ,Radiation therapy ,Multi-modality imaging ,Gold ,Radiotherapy, Intensity-Modulated ,Nuclear medicine ,business ,Fiducial marker ,Radiotherapy, Image-Guided - Abstract
Background In this study, a novel pelvic phantom was developed and used to assess the visibility and presence of artefacts from different types of commercial fiducial markers (FMs) on multi-modality imaging relevant to prostate cancer. Methods and materials The phantom was designed with 3D printed hollow cubes in the centre. These cubes were filled with gel to mimic the prostate gland and two parallel PVC rods were used to mimic bones in the pelvic region. Each cube was filled with gelatine and three unique FMs were positioned with a clinically-relevant spatial distribution. The FMs investigated were; Gold Marker (GM) CIVCO, GM RiverPoint, GM Gold Anchor (GA) line and ball shape, and polymer marker (PM) from CIVCO. The phantom was scanned using several imaging modalities typically used to image prostate cancer patients; MRI, CT, CBCT, planar kV-pair, ExacTrac, 6MV, 2.5MV and integrated EPID imaging. The visibility of the markers and any observed artefacts in the phantom were compared to in-vivo scans of prostate cancer patients with FMs. Results All GMs were visible in volumetric scans, however, they also had the most visible artefacts on CT and CBCT scans, with the magnitude of artefacts increasing with FM size. PM FMs had the least visible artefacts in volumetric scans but they were not visible on portal images and had poor visibility on lateral kV images. The smallest diameter GMs (GA) were the most difficult GMs to identify on lateral kV images. Conclusion The choice between different FMs is also dependent on the adopted IGRT strategy. PM was found to be superior to investigated gold markers in the most commonly used modalities in the management of prostate cancer; CT, CBCT and MRI imaging.
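Marker visibility of this kind is often quantified with a contrast-to-noise ratio between the marker region and its surroundings; a minimal sketch follows (the study's exact visibility metric is not stated in this abstract, so treat the function as an assumed, illustrative choice):

```python
import math

def cnr(marker_roi, background_roi):
    """Contrast-to-noise ratio: |mean(marker) - mean(background)| divided by
    the background standard deviation. Inputs are flat lists of voxel values."""
    mm = sum(marker_roi) / len(marker_roi)
    mb = sum(background_roi) / len(background_roi)
    sb = math.sqrt(sum((v - mb) ** 2 for v in background_roi) / len(background_roi))
    return abs(mm - mb) / sb

# A bright gold marker against a quiet gelatine background scores high:
print(round(cnr([100, 100], [8, 10, 10, 12]), 2))  # 63.64
```

A higher CNR means the marker stands out more clearly against the phantom background, which is the property being compared across CT, CBCT, MRI and portal images in the study.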
- Published
- 2019
162. AM2FNet: Attention-based Multiscale & Multi-modality Fused Network
- Author
-
Zhiyong Huang, Rong Chen, and Yuanlong Yu
- Subjects
Computer science ,business.industry ,Existential quantification ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,010501 environmental sciences ,01 natural sciences ,Multi modality ,Image (mathematics) ,Semantic mapping ,Feature (computer vision) ,0202 electrical engineering, electronic engineering, information engineering ,Fuse (electrical) ,RGB color model ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,FNET ,0105 earth and related environmental sciences - Abstract
How to infer the 3D geometry and 3D semantic label of each unit in a scene, including visible surfaces and occluded parts, is an important issue in many robotic fields. In recent years, there have been some studies on segmenting and completing 3D scenes from 2D information. Most of them complete a scene from a single depth image. Compared with the depth image, the RGB image contains more color and contour features, which can help with semantic labeling. However, how to design an effective strategy to fuse RGB and depth features is a challenging issue. Our paper presents an attention-based multi-scale & multi-modality fused network, called AM2FNet, which includes six modules: a depth feature module, a color feature module, a 3D integration module for multi-modality feature fusion, a 3D refinement module for multi-scale feature fusion, attention modules, and a semantic mapping module. The integration module and the refinement module work together in 3D space to fuse color and depth features at the low, middle and high levels in a top-down fashion. In addition, we use an attention module to efficiently bias input-related features. Experimental results show that our proposed network generates higher-quality semantic scene completion (SSC) and scene completion (SC) results, and outperforms the state-of-the-art methods on the real NYU and synthetic NYUCAD datasets. Meanwhile, the contributions of the individual modules have been illustrated.
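A minimal sketch of the gating idea behind an attention module that biases color versus depth features (illustrative only; AM2FNet's actual modules operate on multi-channel 3D feature maps, and the function below is a hypothetical per-channel reduction of that idea):

```python
import math

def attention_fuse(color_feat, depth_feat, gate_scores):
    """Per-channel gated fusion: a sigmoid attention weight a in (0, 1)
    decides how much of the color vs depth feature passes through."""
    fused = []
    for c, d, g in zip(color_feat, depth_feat, gate_scores):
        a = 1.0 / (1.0 + math.exp(-g))  # sigmoid of the raw gate score
        fused.append(a * c + (1.0 - a) * d)
    return fused

# A zero gate score mixes the two modalities equally:
print(attention_fuse([2.0], [4.0], [0.0]))  # [3.0]
```

Because the gate is learned per channel, the network can lean on depth where geometry dominates (occluded structure) and on color where appearance dominates (object boundaries), which is the motivation given in the abstract.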
- Published
- 2019
163. Multi-Task Cross-Modality Deep Learning for Pedestrian Risk Estimation
- Author
-
Pop, Danut Ovidiu, Babes-Bolyai University [Cluj-Napoca] (UBB), Institut national des sciences appliquées Rouen Normandie (INSA Rouen Normandie), Institut National des Sciences Appliquées (INSA)-Normandie Université (NU), Robotics & Intelligent Transportation Systems (RITS), Inria de Paris, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria), Normandie Université, Babeş-Bolyai Universite, Abdelaziz Bensrhair, Fawzi Nashahibi, Horia F Pop, Laboratoire d'Informatique, de Traitement de l'Information et des Systèmes (LITIS), Institut National des Sciences Appliquées (INSA)-Normandie Université (NU)-Institut National des Sciences Appliquées (INSA)-Normandie Université (NU)-Université de Rouen Normandie (UNIROUEN), Normandie Université (NU)-Université Le Havre Normandie (ULH), Normandie Université (NU), Universitatea Babeș-Bolyai (Cluj-Napoca, Roumanie), Horia Florin Pop, and Fawzi Nashashibi
- Subjects
Reconnaissance des actions des piétons ,Apprentissage en profondeur ,[INFO.INFO-CV]Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV] ,Estimated ,Multi-modalité ,Pedestrian Action Recognition ,Prédiction d'action ,Pedestrian detection ,[INFO.INFO-AI]Computer Science [cs]/Artificial Intelligence [cs.AI] ,Action Prediction ,Deep Learning ,[INFO.INFO-LG]Computer Science [cs]/Machine Learning [cs.LG] ,[INFO.INFO-TI]Computer Science [cs]/Image Processing [eess.IV] ,Temps de traversée ,Multi Modality ,Détection des piétons ,Time to Cross ,[INFO]Computer Science [cs] ,Estimation - Abstract
This Ph.D. thesis is the result of my research work in the machine learning (particularly deep learning), image processing and intelligent transportation fields, addressing the problem of a multi-task pedestrian protection system (PPS) that covers not only pedestrian classification, detection and tracking, but also pedestrian action-unit classification and prediction, and finally pedestrian risk estimation. The goal of our research work is to develop an intelligent pedestrian protection component based only on a single stereo vision system, using an optimal cross-modality deep learning architecture, in order to classify the current pedestrian action, predict their next actions and finally estimate the pedestrian risk via the time to cross for each pedestrian. First, we investigate the classification component, where we analyze how learning representations from one modality enables recognition for other modalities within various deep learning architectures, which is termed cross-modality learning.
Second, we study how cross-modality learning improves end-to-end pedestrian action detection. Third, we analyze pedestrian action prediction and the estimation of the time to cross.
- Published
- 2019
164. Detect depression from communication: how computer vision, signal processing, and sentiment analysis join forces
- Author
-
Yan Jin, Xiangyu Chang, Shuai Huang, Zhangyang Wang, and Aven Samareh
- Subjects
Signal processing ,business.industry ,Computer science ,Sentiment analysis ,Public Health, Environmental and Occupational Health ,020206 networking & telecommunications ,02 engineering and technology ,computer.software_genre ,Multi modality ,03 medical and health sciences ,0302 clinical medicine ,Consistency (negotiation) ,Depression (economics) ,0202 electrical engineering, electronic engineering, information engineering ,Criticism ,Join (sigma algebra) ,Artificial intelligence ,Safety, Risk, Reliability and Quality ,business ,Safety Research ,computer ,030217 neurology & neurosurgery ,Natural language processing - Abstract
Background: Depression is a common illness worldwide. Traditional procedures have generated controversy and criticism, such as accuracy and agreement on consistency of depression diagnosis and asse...
- Published
- 2018
165. Multi-modality Depression Detection via Multi-scale Temporal Dilated CNNs
- Author
-
Xiaofen Xing, Weirui Lu, Bolun Cai, Zhiwei He, and Weiquan Fan
- Subjects
Concordance correlation coefficient ,Computer science ,business.industry ,Test set ,Audio visual ,Pattern recognition ,Artificial intelligence ,business ,Statistical function ,Encoder ,Multi modality - Abstract
Depression, a prevalent mental illness, negatively impacts individuals and society. This paper targets the Depression Detection Challenge with AI Sub-challenge (DDS) task of the Audio Visual Emotion Challenge (AVEC) 2019. Firstly, two task-specific features are proposed: 1) deep contextual text features, which incorporate global text features and sentiment scores estimated by a fine-tuned Bidirectional Encoder Representations from Transformers (BERT) model; 2) span-wise dense temporal statistical features, in which multiple statistical functions are computed over each continuous time span. Furthermore, we propose a multi-scale temporal dilated CNN to precisely capture the hidden temporal dependency in the data for automatic multi-modality depression detection. Our proposed framework achieves competitive performance with a Concordance Correlation Coefficient (CCC) of 0.466 on the development set and 0.430 on the test set, which is remarkably higher than the baseline results of 0.269 on the development set and 0.120 on the test set.
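The Concordance Correlation Coefficient used to score the challenge can be computed directly from its definition, 2·cov(x, y) / (var(x) + var(y) + (mean(x) − mean(y))²); a minimal sketch:

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient between predictions x
    and ground-truth labels y (population variances/covariance)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

print(ccc([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0 for perfect agreement
```

Unlike Pearson correlation, CCC also penalizes systematic bias: predictions that track the labels perfectly but sit at a constant offset score below 1, which is why it is the preferred metric for regression-style affect scores.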
- Published
- 2019
166. All fiber optic multi-modality imaging endo-speckle-fluoroscope for disease diagnosis in body cavities
- Author
-
Sujatha Narayanan Unni, Murukeshan Vadakke Matham, and School of Mechanical and Aerospace Engineering
- Subjects
Engineering ,Speckle pattern ,All fiber ,business.industry ,Fluoroscope ,Engineering::Bioengineering [DRNTU] ,business ,Multi modality ,Biomedical engineering - Abstract
The thesis contains the detailed research work carried out on various theoretical aspects regarding the interaction between the proposed endo-speckle fluoroscope and the curved colon surface during different modes of diagnosis. DOCTOR OF PHILOSOPHY (MPE)
- Published
- 2019
167. Multi-Modality Human Action Recognition
- Author
-
Yu Zhu
- Subjects
Cognitive science ,business.industry ,Action recognition ,Artificial intelligence ,business ,Psychology ,Multi modality - Published
- 2019
168. P2243 CTCA alone demonstrates superior diagnostic accuracy, prognostic utility and is less expensive than CTCA combined with subsequent multi-modality functional imaging in patients with ischaemic symptoms
- Author
-
Edd Maclean, Edward D. Nicol, T Ngee, Joban Sehmi, and Gajen Kanaganayagam
- Subjects
medicine.medical_specialty ,business.industry ,medicine.medical_treatment ,Ischemia ,Revascularization ,medicine.disease ,Multi modality ,Functional imaging ,medicine ,Medical imaging ,Stress Echocardiography ,In patient ,Radiology ,Cardiology and Cardiovascular Medicine ,business ,Perfusion - Abstract
Background The evaluation of suspected ischaemic symptoms incorporates multi-modality anatomical and functional imaging tests. The 2016 update to the UK's NICE guidelines recommends CT coronary angiogram (CTCA) first line in patients without known coronary artery disease. Additive multi-modality functional imaging may provide synergistic diagnostic and prognostic information. Purpose To investigate the diagnostic accuracy, prognostic utility and cost of CTCA combined with subsequent multi-modality functional testing versus (vs) CTCA alone. Methods 772 consecutive patients were referred to a single UK tertiary centre with symptoms suggestive of ischaemia. 657 individuals (“CTCA group”) underwent CTCA alone, and 115 individuals (“Combined group”) underwent CTCA and then either perfusion cardiac MRI (n=25), stress echocardiogram (n=16), or myocardial perfusion scintigraphy (n=74). Patients underwent invasive angiography (n=79) +/− revascularisation at the discretion of the referring clinician. All readers and operators were aware of previous imaging findings. Revascularised patients (n=52) were excluded from long term follow-up. The remaining patients were followed-up for a mean of 38.1±17.4 months and the incidence of major adverse cardiovascular events (MACE) recorded. Costs were derived from the NICE guidelines. Results Baseline characteristics were similar between groups. The Combined group underwent significantly more invasive angiograms than the CTCA group (29.6% vs 6.8%, p=0.0001) with no significant difference in the rate of revascularisation (73% vs 67%, p=0.72). Mean time from CTCA to angiogram was significantly longer in the Combined group (81.2 vs 38.1 days, p=0.0001). Both sensitivity and specificity were lower in the Combined group than in the CTCA group (sensitivity: 70% vs 93%, specificity: 75% vs 100%). The rate of long term MACE was significantly higher in the Combined group (8.7% vs 2.6%, p=0.0026). 
Multivariate analysis of CTCA and functional imaging findings found that the CTCA-derived four-vessel aggregate stenosis score (0–12) was the strongest predictor of MACE for the whole cohort (OR 4.4, p < …). Conclusions: Combining multi-modality functional testing with CTCA increased costs but did not improve diagnostic accuracy or long-term outcomes. Further reductions in both MACE and unnecessary invasive angiography are desirable; CT-derived functional data such as FFRCT may be implicated.
- Published
- 2019
169. Multi-modality 3D mandibular resection planning in head and neck cancer using CT and MRI data fusion
- Author
-
Max J. H. Witjes, Joep Kraeima, Rutger H. Schepers, Roel J H M Steenbakkers, K.P. Schepman, Haydar Aslan Gülbitti, Jan L. N. Roodenburg, Bart Dorgelo, Fred K L Spijkervet, Damage and Repair in Cancer Development and Cancer Treatment (DARE), Guided Treatment in Optimal Selected Cancer Patients (GUTS), Translational Immunology Groningen (TRIGR), and Basic and Translational Research and Imaging Methodology Development in Groningen (BRIDGE)
- Subjects
Male ,Cancer Research ,medicine.medical_specialty ,3D planning ,SURGERY ,ACCURACY ,Mandible ,Surgical planning ,Multimodal Imaging ,Multi modality ,PLATES ,Resection ,03 medical and health sciences ,Oncologic margins ,0302 clinical medicine ,MRI scan ,Medicine ,Humans ,University medical ,MARGINS ,Aged ,business.industry ,Head and neck cancer ,CAD/CAM RECONSTRUCTION ,3D imaging computer generated ,030206 dentistry ,Middle Aged ,Data fusion ,medicine.disease ,Mandibular resection ,Magnetic Resonance Imaging ,TUMORS ,VSP ,Oncology ,Tumour visualisation ,Head and Neck Neoplasms ,030220 oncology & carcinogenesis ,Cohort ,REGISTRATION ,Female ,Radiology ,SQUAMOUS-CELL CARCINOMA ,Oral Surgery ,business ,Tomography, X-Ray Computed ,IMAGE DATA FUSION ,3D image - Abstract
Objectives: 3D virtual surgical planning (VSP) and guided surgery have been proven to be an effective tool for resection and reconstruction of the mandible. Currently, the most widely used 3D VSP approaches to mandibular resection do not include detailed tumour information in the VSP. This manuscript presents a strategy whose aim was to incorporate tumour visualisation into the 3D virtual plan. Three-dimensional VSP of the mandibular resections was based on the fusion of CT and MRI data, which was subsequently applied in clinical practice. Methods: All patients diagnosed with oral squamous cell carcinoma between 2014 and 2017 at the University Medical Centre Groningen were included. The tumour was delineated on the MRI data, after which this dataset was fused with the CT bone data in order to construct a 3D bone-and-tumour model for virtual resection planning. Guided resections were performed, and post-operative evaluation quantified the accuracy of the resection. The histopathological findings and patient and tumour characteristics were compared to those of a historical cohort (2009-2014) of conventional mandibular continuity resections. Results: Twenty-four patients were included in the cohort. The average deviation from the planned resection was 2.2 mm. Histopathological analysis confirmed that all resection planes (bone) were tumour free, compared to 96.4% in the historical cohort. Conclusion: MRI-CT based tumour visualisation and 3D resection planning is a safe and accurate method for oncologic resection of the mandible. It is an improvement on the current methods reported for 3D resection planning based solely on CT data.
- Published
- 2018
170. Imaging of acute unilateral limb swelling: A multi modality overview
- Author
-
Ahmed A. Baz and Talaat A. Hassan
- Subjects
lcsh:Medical physics. Medical radiology. Nuclear medicine ,medicine.medical_specialty ,DVT-imaging ,business.industry ,lcsh:R895-920 ,Unilateral ,Acute ,Limb swelling ,Work-up ,Multi modality ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,030220 oncology & carcinogenesis ,Edema ,Medicine ,Radiology, Nuclear Medicine and imaging ,Radiology ,Differential diagnosis ,business ,Pathological - Abstract
Acute unilateral limb swelling is a common clinical problem with a relatively wide differential diagnosis on both clinical and imaging grounds. Nevertheless, the diagnostic list may include conditions with quite different treatment plans, which mandates early establishment of the correct diagnosis. This review gives a multimodality approach to imaging of unilateral limb swelling with an onset not exceeding 72 h. The most common etiologies for the condition are tabulated by the anatomical level from which they arise, aiming to give a stepwise imaging work-up overviewing the imaging characteristics of each pathological condition.
- Published
- 2018
171. Intelligent analysis of brain images
- Author
-
Qi Zhu, Jiashuang Huang, Wei Shao, Shuo Huang, Xiaoke Hao, Mingliang Wang, and Daoqiang Zhang
- Subjects
Brain network ,Image fusion ,General Computer Science ,Computer science ,business.industry ,Image registration ,Brain research ,Multi modality ,Brain disease ,Neuroimaging ,Computer vision ,Artificial intelligence ,business ,Engineering (miscellaneous) - Abstract
In recent years, the brain research project has aroused considerable public and governmental attention. Brain imaging techniques are an important tool for brain science research, and determining an efficient and effective way to analyze high-dimensional, multi-modality, heterogeneous, and time-variant brain images has become a new hotspot in brain science. In this paper, we first introduce some fundamental methods of brain image analysis, and thereafter review some of our proposed methods in the fields of multi-modal image fusion, brain network construction and analysis, imaging-genomic association analysis, and brain image registration by applying machine learning techniques, especially in the fields of early diagnosis of brain disease and brain decoding.
- Published
- 2018
172. M3L: Multi-modality mining for metric learning in person re-Identification
- Author
-
Jie Wang, Xiaokai Liu, Hongyu Wang, and Xiaorui Ma
- Subjects
Modality (human–computer interaction) ,Property (programming) ,business.industry ,Feature vector ,020206 networking & telecommunications ,02 engineering and technology ,Space (commercial competition) ,Machine learning ,computer.software_genre ,Multi modality ,Linear map ,Artificial Intelligence ,Signal Processing ,Metric (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Baseline (configuration management) ,business ,computer ,Software ,Mathematics - Abstract
Learning a scene-specific distance metric from labeled data is critical for person re-identification. Most earlier works in this area seek a linear transformation of the feature space such that relevant dimensions are emphasized while irrelevant ones are discarded in a global sense. However, when training data exhibit multi-modality transitions, the globally learned metric deviates from the correct metrics learned from each modality. In this study, we propose a multi-modality mining approach for metric learning (M3L) to automatically discover multiple modalities of illumination changes by exploring the shift-invariant property in log-chromaticity space, and then learn a sub-metric for each modality to maximally reduce the bias of a metric learning model with a global sense. Experiments on the challenging VIPeR dataset and the fusion dataset VIPeR&PRID 450S validate the effectiveness of the proposed method, with an average improvement of 2–7% over the original baseline methods.
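The per-modality idea described in the abstract can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `log_chromaticity`, `fit_sub_metrics`, and `mahalanobis` are hypothetical helpers, plain k-means stands in for the paper's modality mining, and an inverse-covariance (Mahalanobis) matrix stands in for each learned sub-metric.

```python
import numpy as np

def log_chromaticity(rgb):
    # Log-chromaticity coordinates log(R/G), log(B/G): an additive
    # illumination shift becomes a translation in this space.
    rgb = np.clip(rgb.astype(float), 1e-6, None)
    return np.stack([np.log(rgb[:, 0] / rgb[:, 1]),
                     np.log(rgb[:, 2] / rgb[:, 1])], axis=1)

def fit_sub_metrics(X, chroma, n_modalities=2, seed=0):
    """Cluster samples into illumination modalities (plain k-means on
    chromaticity), then fit one inverse-covariance metric per modality."""
    rng = np.random.default_rng(seed)
    centers = chroma[rng.choice(len(chroma), n_modalities, replace=False)]
    for _ in range(20):
        d = ((chroma[:, None] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(n_modalities):
            if (labels == k).any():
                centers[k] = chroma[labels == k].mean(0)
    metrics = {}
    for k in range(n_modalities):
        Xk = X[labels == k]
        if len(Xk) < 2:  # degenerate cluster: fall back to Euclidean
            metrics[k] = np.eye(X.shape[1])
            continue
        cov = np.cov(Xk, rowvar=False) + 1e-3 * np.eye(X.shape[1])
        metrics[k] = np.linalg.inv(cov)  # sub-metric for modality k
    return labels, metrics

def mahalanobis(x, y, M):
    # Squared Mahalanobis distance under sub-metric M.
    d = x - y
    return float(d @ M @ d)
```

At query time, a sample would be assigned to its nearest modality cluster and compared under that cluster's sub-metric rather than a single global one.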
- Published
- 2018
173. MMA: a multi-view and multi-modality benchmark dataset for human action recognition
- Author
-
Yanbing Xue, Hua Zhang, Tao-tao Han, Zan Gao, and Guangping Xu
- Subjects
Biometrics ,Computer Networks and Communications ,business.industry ,Computer science ,020207 software engineering ,02 engineering and technology ,Machine learning ,computer.software_genre ,Multi modality ,Variety (cybernetics) ,Action (philosophy) ,Hardware and Architecture ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Feature (machine learning) ,Benchmark (computing) ,Action recognition ,020201 artificial intelligence & image processing ,Artificial intelligence ,Baseline (configuration management) ,business ,computer ,Software - Abstract
Human action recognition is an active research topic in both the computer vision and machine learning communities, with broad applications including surveillance, biometrics and human-computer interaction. Although some famous action datasets have been released in the past decades, limitations remain, including limited action categories and samples, few camera views and little variety of scenarios. Moreover, most of them are designed for a subset of the learning problems, such as the single-view, cross-view and multi-task learning problems. In this paper, we introduce a multi-view, multi-modality benchmark dataset for human action recognition (abbreviated to MMA). MMA consists of 7080 action samples from 25 action categories, including 15 single-subject actions and 10 double-subject interactive actions, in three views of two different scenarios. Further, we systematically benchmark the state-of-the-art approaches on MMA with respect to all three learning problems using different temporal-spatial feature representations. Experimental results demonstrate that MMA is challenging on all three learning problems due to significant intra-class variations, occlusion issues, view and scene variations, and multiple similar action categories. Meanwhile, we provide the baseline for the evaluation of existing state-of-the-art algorithms.
- Published
- 2018
174. Multi-modality medical image fusion based on image decomposition framework and nonsubsampled shearlet transform
- Author
-
Huiqian Du, Wenbo Mei, and Xingbin Liu
- Subjects
Image fusion ,Fusion ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Process (computing) ,020206 networking & telecommunications ,Health Informatics ,Pattern recognition ,02 engineering and technology ,Multi modality ,Image (mathematics) ,Moving frame ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Fuse (electrical) ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,Shearlet transform ,Mathematics - Abstract
Medical image fusion increases the accuracy of clinical diagnosis and analysis by integrating complementary information from multi-modality medical images. A novel multi-modality medical image fusion algorithm exploiting a moving frame based decomposition framework (MFDF) and the nonsubsampled shearlet transform (NSST) is proposed. The MFDF is applied to decompose the source images into texture components and approximation components. A maximum-selection fusion rule is employed to fuse the texture components, aimed at transferring salient gradient information to the fused image. The approximation components are merged using NSST. Finally, a component synthesis process is adopted to produce the fused image. Experimental results verify that the proposed method achieves better performance than other compared state-of-the-art methods in both visual effects and objective criteria.
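The decompose-fuse-synthesize pipeline in the abstract can be sketched as below. This is a rough illustration, not the authors' code: a simple box filter stands in for the MFDF decomposition and plain averaging stands in for the NSST merge of the approximation components, so only the maximum-selection rule for the texture components is shown faithfully.

```python
import numpy as np

def box_blur(img, r=2):
    # Box filter as a stand-in for the MFDF decomposition: it only
    # separates an approximation (base) layer from a texture layer.
    pad = np.pad(img.astype(float), r, mode="edge")
    k = 2 * r + 1
    acc = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            acc += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return acc / (k * k)

def fuse(img_a, img_b, r=2):
    """Decompose both sources, apply maximum selection to the texture
    components, merge approximations (averaging here, NSST in the paper),
    then synthesize the fused image."""
    base_a, base_b = box_blur(img_a, r), box_blur(img_b, r)
    tex_a, tex_b = img_a - base_a, img_b - base_b
    # Maximum-selection rule: keep the coefficient with the larger
    # magnitude at each pixel, transferring the stronger gradient.
    fused_tex = np.where(np.abs(tex_a) >= np.abs(tex_b), tex_a, tex_b)
    fused_base = 0.5 * (base_a + base_b)  # placeholder for the NSST merge
    return fused_base + fused_tex
```

With identical inputs the pipeline returns the input unchanged, which is a handy sanity check for any decomposition-based fusion scheme.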
- Published
- 2018
175. Multi-modality imaging: Bird’s eye view from the 2017 American Heart Association Scientific Sessions
- Author
-
Steven G. Lloyd, Fadi G. Hage, and Wael AlJaroudi
- Subjects
medicine.medical_specialty ,Cardiac computed tomography ,Association (object-oriented programming) ,Cardiology ,030204 cardiovascular system & hematology ,Multimodal Imaging ,Risk Assessment ,Multi modality ,030218 nuclear medicine & medical imaging ,Imaging modalities ,03 medical and health sciences ,0302 clinical medicine ,Coronary Circulation ,medicine ,Animals ,Humans ,Radiology, Nuclear Medicine and imaging ,Medical physics ,Radionuclide Imaging ,Chicago ,medicine.diagnostic_test ,business.industry ,Heart ,Magnetic resonance imaging ,American Heart Association ,Congresses as Topic ,United States ,Echocardiography ,Nuclear Medicine ,Tomography, X-Ray Computed ,Cardiology and Cardiovascular Medicine ,Cardiac magnetic resonance ,business - Abstract
This review summarizes key imaging studies that were presented in the American Heart Association Scientific Sessions 2017 related to the fields of nuclear cardiology, cardiac computed tomography, cardiac magnetic resonance, and echocardiography. The aim of this bird's eye view is to inform readers about multiple studies reported at the meeting from these different imaging modalities. While such a review is most useful for those that did not attend the conference, we find that a general overview may also be useful to those that did since it is often difficult to get exposure to many abstracts at large meetings. The review, therefore, aims to help readers stay updated on the newest imaging studies presented at the meeting and will hopefully stimulate new ideas for future research in imaging.
- Published
- 2018
176. Sharing values to safeguard the future: British Holocaust Memorial Day commemoration as epideictic rhetoric
- Author
-
John Richardson
- Subjects
060201 languages & linguistics ,Literature ,Linguistics and Language ,business.industry ,Communication ,media_common.quotation_subject ,05 social sciences ,Media studies ,050801 communication & media studies ,06 humanities and the arts ,Art ,Ceremony ,Multi modality ,Epideictic ,Multimodality ,0508 media and communications ,The Holocaust ,0602 languages and literature ,Mediation ,Rhetoric ,business ,media_common - Abstract
This article explores the rhetoric, and mass mediation, of the national Holocaust Memorial Day (HMD) commemoration ceremony, as broadcast on British television. I argue that the televised national ceremonies should be approached as an example of multi-genre epideictic rhetoric, working up meanings through a hybrid combination of genres (speeches, poems, readings), author/animators and modes (speech, music, light, movement and silence). Epideictic rhetoric has often been depreciated as simply ceremonial ‘praise or blame’ speeches. However, given that the topics of praise/blame assume the existence of social norms, epideictic also acts to presuppose and evoke common values, in general, and a collective recognition of shared social responsibilities, in particular. My methodology draws on the Discourse-Historical Approach to Critical Discourse Analysis, given, first, its central prominence in analysing argumentative strategies in discourse and, second, the ways it facilitates a reflexive ‘shuttling’ between text-discursive features, intertextual relations and wider contexts of society and history. Here, I examine how a catastrophic past is invoked in speech and evoked through image and music, in response to the demands that uncertainty of the future ‘places upon one’s conscience’.
- Published
- 2018
177. Multi-modality weakly labeled sentiment learning based on Explicit Emotion Signal for Chinese microblog
- Author
-
Yanping Lv, Xiao Ke, Dazhen Lin, Lingxiao Li, and Donglin Cao
- Subjects
Social network ,Microblogging ,business.industry ,Computer science ,Cognitive Neuroscience ,SIGNAL (programming language) ,Sentiment analysis ,020207 software engineering ,02 engineering and technology ,Machine learning ,computer.software_genre ,Multi modality ,Computer Science Applications ,Task (project management) ,Domain (software engineering) ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Social media ,Artificial intelligence ,business ,computer ,Semantic gap - Abstract
Understanding the sentiments of users from cross-media contents that contain texts and images is an important task for many social network applications. However, due to the semantic gap between cross-media features and sentiments, machine learning methods need a lot of human-labeled samples. Furthermore, for each kind of media content, it is necessary to constantly add many new human-labeled samples because of new expressions of sentiment. Fortunately, some emotion signals, such as emoticons, denote users' emotions in cross-media contents. In order to use these weak labels to build a unified multi-modality sentiment learning framework, we propose an Explicit Emotion Signal (EES) based multi-modality sentiment learning approach that uses a huge number of weakly labeled samples in sentiment learning. Our approach has three advantages. Firstly, only a few human-labeled samples are needed to reach the same performance obtained by traditional machine-learning-based sentiment prediction approaches. Secondly, the approach is flexible and can easily combine text- and vision-based sentiment learning through deep neural networks. Thirdly, because a lot of weakly labeled samples can be used in EES, the trained model is more robust under domain transfer. In this paper, we first investigate the correlation between sentiments and emoticons and choose emoticons as the Explicit Emotion Signals in our approach; second, we build a two-stage multi-modality sentiment learning framework based on Explicit Emotion Signals. Our experimental results show that our approach not only achieves the best performance but also needs only 3% and 43% of the training samples to obtain the same performance as the Visual Geometry Group (VGG) model and the Long Short-Term Memory (LSTM) model on images and texts, respectively.
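The weak-labeling step that turns emoticons into Explicit Emotion Signals could be sketched as follows. The emoticon lexicon and the `weak_label` helper are hypothetical illustrations, not the paper's code; a real system would use a curated emoticon inventory and feed the resulting weak labels into the deep text and vision models.

```python
# Hypothetical emoticon-to-polarity lexicon; a real EES system would use
# a curated inventory mined from the microblog platform.
EMOTICON_POLARITY = {":)": 1, ":-)": 1, ":D": 1, "(^_^)": 1,
                     ":(": -1, ":-(": -1, "T_T": -1}

def weak_label(post_text):
    """Return (cleaned_text, weak_label), where the label (+1/-1) comes
    from emoticons found in the post; None if there is no emoticon
    signal or the signals contradict each other."""
    scores = [pol for emo, pol in EMOTICON_POLARITY.items()
              if emo in post_text]
    cleaned = post_text
    for emo in EMOTICON_POLARITY:
        cleaned = cleaned.replace(emo, " ")
    if not scores or (max(scores) > 0 and min(scores) < 0):
        return cleaned.strip(), None  # no signal, or mixed signal
    return cleaned.strip(), 1 if scores[0] > 0 else -1
```

Posts with contradictory emoticons are discarded rather than labeled, which keeps the weak supervision clean at the cost of some coverage.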
- Published
- 2018
178. Innovative multi-modality imaging to assess paravalvular leak
- Author
-
Martin Kloeckner, Sébastien Hascoët, S. Monnot, Marc-Antoine Isorni, and Benoit Gerardin
- Subjects
medicine.medical_specialty ,Image in Intervention ,business.industry ,lcsh:R ,medicine ,lcsh:Medicine ,Radiology ,Paravalvular leak ,Cardiology and Cardiovascular Medicine ,business ,Multi modality - Published
- 2019
179. Multi-Modality Imaging Reveals Structural Centrosome Aberrations As a Potential Driver of Chromosomal Instability in Early-Stage Plasma Cell Disorders
- Author
-
Martin Schorb, Peter Dreger, Isabella Haberbosch, Sebastian Köhrer, Mandy Börmel, Tobias Dittrich, Alexander Brobeil, Marc-Steffen Raab, Yannick Schwab, Gabor Pajor, Hartmut Goldschmidt, Stefan Schönland, Niels Weinhold, Carsten Müller-Tidow, Ute Hegenbart, and Alwin Krämer
- Subjects
Immunology ,Cell Biology ,Hematology ,Plasma cell ,Biology ,Biochemistry ,Multi modality ,medicine.anatomical_structure ,Centrosome ,Chromosome instability ,Cancer research ,medicine ,Stage (cooking) ,health care economics and organizations - Abstract
Introduction Plasma cell disorders (PCD) are clonal outgrowths of pre-malignant or malignant plasma cells (PC), characterized by extensive chromosomal aberrations. Centrosome aberrations (CA) were identified to be a major driver of chromosomal instability in cancer. However, their origin, incidence, and composition in patient-derived tumor cells is only poorly understood. Moreover, while most studies on CA in primary tissues rely on immunostaining against centrosomal proteins at low resolution, systematic analysis of structural aberrations on an ultrastructural level is lacking. Here, we use a multi-modality approach integrating high-throughput volume electron microscopy (EM) and immunofluorescence (IF) imaging, expression profiling, and clinical data to enhance our understanding of CA evolution in primary cancer cells, using the PCD spectrum as a paradigm for malignant progression. Methods Consenting patients enrolled in the study were either healthy donors or diagnosed with PCD (monoclonal gammopathy of undetermined significance (MGUS), smoldering myeloma (SM), overt multiple myeloma (MM), or plasma cell leukemia (PL)) or other B-cell malignancies (B-cell acute lymphoblastic leukemia (B-ALL) and B-cell chronic lymphocytic leukemia (B-CLL)). Bone marrow aspirates (except B-ALL/B-CLL) were sorted for CD138 + PCs and prepared for both IF and electron tomography (ET) assessment. For ET, slot grids were loaded with 5 consecutive, 200 nm thick sections of epoxy resin-embedded cells. High-throughput screening by transmission EM was performed on the central section to identify cells containing centrosomes (Schorb et al., Nature Methods 2019). Serial-section ET was performed after semi-automated targeting of centrosomes on adjacent sections. Tomograms were reconstructed to produce volumes of at least 1 µm in Z for each centrosome. At least 30 completely featured centrioles were evaluated per case. 
An online repository of all acquired ET data will be made publicly available for interactive visualization upon conclusion of the study. For IF-based analysis, cells were fixed and stained for nuclei and the centrosomal proteins centrin and pericentrin. Results EM screening of 42,876 cells yielded 1873 completely featured centrioles in 1297 CD138 + PCs from 21 PCD patients and eight healthy donors. Both ET and IF revealed no increased frequency of supernumerary centrosomes in normal and primary patient PCs as compared to healthy cells of B-lymphatic origin. In contrast, ET revealed frequent centriole over-elongation over 500 nm, structural aberrations, and decoration of centrioles with supernumerary appendages (Fig. 1). Unexpectedly, in healthy individuals, centriole over-elongation was most pronounced and increased with age to a maximum of 75.0 % of cells. In PCD, over-elongation decreased in frequency from the early disease stages MGUS and SM, via overt MM to PL, where it was fully absent. Similarly, the amount of additional structural aberrations correlated with centriole length and decreased from healthy donors via MGUS, SM, and MM to PL, where they were absent as well. In line with these findings, gene expression profiling revealed significantly elevated mRNA levels of centriole elongation activators in healthy CD138 + PCs as compared to malignant PCs from MGUS and MM. MM patients with > 20 % over-elongated centrioles showed significantly better progression-free survival (p < …). Conclusions Our data imply that centrioles lengthen with individual cellular age in healthy donor PCs. In vitro, over-elongated centrioles were shown to perturb mitotic spindle symmetry and to contribute to multipolar spindle formation previously (Marteil et al., Nature Communications 2018). 
Centriole over-elongation and subsequent structural CA in long-lived, quiescent PCs might hence offer a possibility for chromosomal instability development in early disease stages if these cells re-enter the cell cycle. Within increasingly advanced PCD, structural CA became less frequent, indicating an inverse relationship between centriole length and PC proliferation rate. Accordingly, in MM patients, a low rate or absence of over-elongated centrioles was associated with poor PFS and OS. Figure 1 Figure 1. Disclosures Weinhold: Sanofi: Honoraria. Goldschmidt: MSD: Research Funding; GSK: Honoraria; Incyte: Research Funding; Adaptive Biotechnology: Consultancy; Janssen: Consultancy, Honoraria, Other: Grants and/or Provision of Investigational Medicinal Product, Research Funding; BMS: Consultancy, Honoraria, Other: Grants and/or Provision of Investigational Medicinal Product, Research Funding; Celgene: Consultancy, Honoraria, Other: Grants and/or Provision of Investigational Medicinal Product, Research Funding; Chugai: Honoraria, Other: Grants and/or Provision of Investigational Medicinal Product, Research Funding; Johns Hopkins University: Other: Grant; Molecular Partners: Research Funding; Mundipharma: Research Funding; Amgen: Consultancy, Honoraria, Other: Grants and/or Provision of Investigational Medicinal Product, Research Funding; Novartis: Honoraria, Research Funding; Dietmar-Hopp-Foundation: Other: Grant; Sanofi: Consultancy, Honoraria, Other: Grants and/or Provision of Investigational Medicinal Product, Research Funding; Takeda: Consultancy, Research Funding. Müller-Tidow: Pfizer: Research Funding; Bioline: Research Funding; Janssen: Consultancy, Research Funding. 
Raab: Novartis: Membership on an entity's Board of Directors or advisory committees, Research Funding; Sanofi: Membership on an entity's Board of Directors or advisory committees, Research Funding; Roche: Consultancy; Celgene: Membership on an entity's Board of Directors or advisory committees; GSK: Honoraria, Membership on an entity's Board of Directors or advisory committees; Abbvie: Consultancy, Honoraria; Janssen: Membership on an entity's Board of Directors or advisory committees; BMS: Consultancy, Membership on an entity's Board of Directors or advisory committees; Amgen: Consultancy, Membership on an entity's Board of Directors or advisory committees. Dreger: BMS: Consultancy; Riemser: Consultancy, Research Funding, Speakers Bureau; AbbVie: Consultancy, Speakers Bureau; Roche: Consultancy, Speakers Bureau; Bluebird Bio: Consultancy; AstraZeneca: Consultancy, Speakers Bureau; Janssen: Consultancy; Gilead Sciences: Consultancy, Speakers Bureau; Novartis: Consultancy, Speakers Bureau. Hegenbart: Janssen: Consultancy, Research Funding; Akcea: Honoraria; Pfizer: Consultancy, Honoraria; Prothena: Research Funding; Alnylam: Honoraria. Schönland: Sanofi: Research Funding; Pfizer: Honoraria; Janssen: Honoraria, Other: Travel grants, Research Funding; Takeda: Honoraria, Other: Travel grants; Prothena: Honoraria, Other: Travel grants. Krämer: F. Hoffmann-La Roche Ltd.: Consultancy, Honoraria, Other: Honoraria to Institution, Travel/Accomodation/expenses; Bayer: Other: Honoraria to Institution, Research Funding; AbbVie: Consultancy, Honoraria; Daiichi Sankyo: Consultancy, Honoraria, Other: Travel/Accomodation/Expenses; Merck: Research Funding; Celgene: Other: Travel/Accomodation/Expenses; Bristol Myers Squibb: Consultancy.
- Published
- 2021
180. Use of Triamcinolone in keloid
- Author
-
B Shwetha and D C Sathyaki
- Subjects
medicine.medical_specialty ,Triamcinolone acetonide ,Triamcinolone Injection ,business.industry ,medicine.medical_treatment ,medicine.disease ,Tertiary care ,Multi modality ,Surgery ,Radiation therapy ,Regimen ,Keloid ,medicine ,Surgical excision ,business ,medicine.drug - Abstract
Background: Keloids are well known for recurrence, and there is no standardized regimen for their treatment. Many different treatment modalities, such as surgical excision, intralesional corticosteroids, radiotherapy, and pressure earrings, have been used for keloids. Surgical excision alone may result in a recurrence rate of 40-100%, and many different modalities have been tried to prevent recurrence. The aim of the study was to evaluate the efficacy of triamcinolone in preventing recurrence of keloid. Methods: 40 patients underwent excision of keloid at a tertiary care centre. Surgery alone was performed in 20 patients; in the other 20 patients, surgery was followed by post-operative intralesional triamcinolone injections given at weekly intervals for 6 weeks. Patients were followed up for a period of 2 years. Results: Recurrence was found in 5 patients who underwent excision alone, and there was no recurrence among patients who received post-operative intralesional triamcinolone. Conclusions: Multi-modality treatment is better for preventing recurrence of keloid.
- Published
- 2021
181. S2899 Importance of Early Multi-Modality Imaging in the Diagnosis of Cardiac Metastasis of Hepatocellular Carcinoma
- Author
-
Sai V. Nimmagadda, Ahmar Alam, Alexandra France, Christa L. Whitney-Miller, Harry Wang, Hanna Mieszczanska, and Eugene Storozynsky
- Subjects
medicine.medical_specialty ,Hepatology ,business.industry ,Hepatocellular carcinoma ,Gastroenterology ,medicine ,Cardiac metastasis ,Radiology ,medicine.disease ,business ,Multi modality - Published
- 2021
182. The role of local thyroid hormone perturbation in hippocampal sclerosis dementia—commentary on a multi-modality study
- Author
-
Salman Razvi and Earn H Gan
- Subjects
Gerontology ,Hippocampal sclerosis ,Population ageing ,medicine.medical_specialty ,business.industry ,Dementia with Lewy bodies ,medicine.disease ,Article ,Multi modality ,mental disorders ,medicine ,Dementia ,Surgery ,Alzheimer's disease ,business ,Psychiatry ,Vascular dementia ,Frontotemporal dementia - Abstract
We report evidence of a novel pathogenetic mechanism in which thyroid hormone dysregulation contributes to dementia in elderly persons. Two single nucleotide polymorphisms (SNPs) on chromosome 12p12 were the initial foci of our study: rs704180 and rs73069071. These SNPs were identified by separate research groups as risk alleles for non-Alzheimer's neurodegeneration. We found that the rs73069071 risk genotype was associated with hippocampal sclerosis (HS) pathology among people with the rs704180 risk genotype (National Alzheimer's Coordinating Center/Alzheimer's Disease Genetic Consortium data; n=2,113, including 241 autopsy-confirmed HS cases). Further, both rs704180 and rs73069071 risk genotypes were associated with widespread brain atrophy visualized by MRI (Alzheimer's Disease Neuroimaging Initiative data; n=1,239). In human brain samples from the Braineac database, both rs704180 and rs73069071 risk genotypes were associated with variation in expression of ABCC9, a gene which encodes a metabolic sensor protein in astrocytes. The rs73069071 risk genotype was also associated with altered expression of a nearby astrocyte-expressed gene, SLCO1C1. Analyses of human brain gene expression databases indicated that the chromosome 12p12 locus may regulate particular astrocyte-expressed genes induced by the active form of thyroid hormone, triiodothyronine (T3). This is informative biologically because the SLCO1C1 protein transports thyroid hormone into astrocytes from blood. Guided by the genomic data, we tested the hypothesis that altered thyroid hormone levels could be detected in cerebrospinal fluid (CSF) obtained from persons with HS pathology. Total T3 levels in CSF were elevated in HS cases (p < …).
- Published
- 2017
183. Multi modality of hollow tube Gd2O3:Eu3+ nanoparticles by using nonpolar solvent
- Author
-
Sung Jun Park, Hyun Kyoung Yang, and Jin Young Park
- Subjects
Materials science ,medicine.diagnostic_test ,Mechanical Engineering ,Metals and Alloys ,Analytical chemistry ,Nanoparticle ,Phosphor ,Computed tomography ,02 engineering and technology ,010402 general chemistry ,021001 nanoscience & nanotechnology ,01 natural sciences ,Toluene ,Mr imaging ,Multi modality ,0104 chemical sciences ,Solvent ,chemistry.chemical_compound ,Nuclear magnetic resonance ,chemistry ,Mechanics of Materials ,Materials Chemistry ,medicine ,0210 nano-technology ,Luminescence - Abstract
Gd2O3:Eu3+ is a useful material in physics, chemistry and biomedicine, applicable as a magnetic resonance imaging (MRI) and X-ray computed tomography (CT) contrast agent and, through its luminescence, as a phosphor. In this paper, Gd2O3:Eu3+ was synthesized by a one-step, low-cost solvothermal reaction with different solvent ratios (de-ionized water (DI)/toluene (TL)). Altering the solvent ratio affects the morphology (rod to hollow-tube shape) and size of Gd2O3:Eu3+, which induces property variations such as luminescence intensity and MR and CT imaging brightness. The luminescence intensity of Gd2O3:Eu3+ nanoparticles with DI 40/TL 0 is 1.48, 1.68, 1.73 and 1.75 times higher than that of the others (DI 0/TL 40, DI 10/TL 30, DI 20/TL 20 and DI 30/TL 10); MR images of Gd2O3:Eu3+ with DI 0/TL 40 were 2.5 times brighter than those of the commercial contrast agent (Dotarem), and CT images of Gd2O3:Eu3+ with DI 0/TL 40 were 1.68 times brighter than Dotarem. Gd2O3:Eu3+ prepared with different solvent ratios can therefore be selectively applied as a potential MRI, CT and FI multimodal imaging agent.
- Published
- 2017
184. ASO Author Reflections: Percutaneous Image-Guided Ablation and Multi-modality Management of Lung Metastases
- Author
-
Florian J. Fintelmann and Konstantin S Leppelmann
- Subjects
medicine.medical_specialty ,Lung ,Percutaneous ,business.industry ,MEDLINE ,Multi modality ,Image guided ablation ,medicine.anatomical_structure ,Text mining ,Oncology ,Surgical oncology ,Medicine ,Surgery ,Radiology ,business - Published
- 2021
185. Multi-modality echocardiographic imaging in cardiac amyloidosis
- Author
-
Mingxing Xie, Wenqian Wu, Yongxing Zhang, and Yuman Li
- Subjects
medicine.medical_specialty ,Text mining ,Cardiac amyloidosis ,business.industry ,MEDLINE ,medicine ,General Medicine ,Radiology ,business ,Multi modality - Published
- 2021
186. Narrativ kompetens
- Author
-
Stefan Lundström and Christina Olin-Scheller
- Subjects
narrative competence ,multi modality ,textual universe ,convergence culture ,collective intelligence ,fan fiction ,Language and Literature - Abstract
Narrative Competence – a Necessary Qualification in Multi Modal Text Universe? Today's media landscape is characterized by convergence culture, in which the formats and distribution channels of a narrative come together to create extensive multimodal text universes. At the same time, the traditional division between producer and consumer is being challenged. With examples from fan fiction and role-playing games, the article discusses the notion of narrative competence as a possible way of understanding and describing participation in multimodal text universes. Narrative competence is characterized by, among other things, social interplay within a collective intelligence, the ability to discern plots and make creative imitations, and a meta-reflective ability to meet, test, and understand one's own reactions. One conclusion of the article is that if school education wishes to be experienced as relevant by students, it should include narrative competence; doing so would also increase the possibilities of reaching democratic goals.
- Published
- 2010
- Full Text
- View/download PDF
187. Multi Modality Imaging Features of Cardiac Myxoma
- Author
-
Lee, Joseph C. and O'Rourke, Rachael
- Subjects
medicine.medical_specialty ,business.industry ,Cardiac Neoplasm ,Cardiac mass ,Myxoma ,Review Article ,medicine.disease ,Multimodality imaging ,Multi modality ,Natural history ,medicine.anatomical_structure ,Cardiac chamber ,cardiovascular system ,medicine ,Radiology, Nuclear Medicine and imaging ,Fossa ovalis ,Cardiac myxoma ,cardiovascular diseases ,Radiology ,Presentation (obstetrics) ,Cardiology and Cardiovascular Medicine ,business ,Letter to the Editor ,Interatrial septum - Abstract
Primary cardiac neoplasms are rare entities of which approximately 75% are benign and the remaining 25% malignant. Myxomas are the most common benign primary cardiac tumor (30%) and most commonly arise in the left atrium from the interatrial septum at the fossa ovalis. However, they also can originate in any cardiac chamber. Clinical presentation and patient symptomatology are determined by size, location, and mobility of the myxoma. This review will discuss the clinical presentation, natural history, pathology, and multimodality imaging features of cardiac myxomas.
- Published
- 2021
188. A novel few-shot learning based multi-modality fusion model for COVID-19 rumor detection from online social media
- Author
-
Chenyou Fan, Wei Fang, Heng-Yang Lu, and Xiaoning Song
- Subjects
General Computer Science ,Coronavirus disease 2019 (COVID-19) ,Computer science ,Microblogging ,Few-shot learning ,Data Mining and Machine Learning ,Network Science and Online Social Networks ,Machine learning ,computer.software_genre ,Multi modality ,Social media ,Artificial Intelligence ,TheoryofComputation_ANALYSISOFALGORITHMSANDPROBLEMCOMPLEXITY ,Learning based ,Rumor detection ,Multi-modality ,Generality ,Event (computing) ,business.industry ,COVID-19 ,QA75.5-76.95 ,Rumor ,Natural Language and Speech ,Computational Linguistics ,Electronic computers. Computer science ,Artificial intelligence ,business ,computer - Abstract
Background: Rumor detection is a popular research topic in natural language processing and data mining. Since the outbreak of COVID-19, related rumors have been widely posted and spread on online social media, seriously affecting people's daily lives, the national economy, and social stability. It is both theoretically and practically essential to detect and refute COVID-19 rumors quickly and effectively. Because COVID-19 was an emergent event that broke out drastically, related rumor instances were very scarce and distinct at its early stage. This makes the detection task a typical few-shot learning problem. Traditional rumor detection techniques, however, focus on detecting existing events with enough training instances, so they fail on emergent events such as COVID-19. Developing a new few-shot rumor detection framework has therefore become critical for preventing rumors from spreading at early stages. Methods: This article focuses on few-shot rumor detection, especially detecting COVID-19 rumors from Sina Weibo with only a minimal number of labeled instances. We contribute a Sina Weibo COVID-19 rumor dataset for few-shot rumor detection and propose a few-shot learning-based multi-modality fusion model for few-shot rumor detection. A full microblog consists of the source post and its corresponding comments, which are treated as two modalities and fused with meta-learning methods. Results: Experiments in few-shot rumor detection on the collected Weibo dataset and the public PHEME dataset show significant improvement and good generality of the proposed model.
- Published
- 2021
189. International journal of cardiology congenital heart disease the ACHD multi-modality imaging series: Imaging of atrial septal defects in adulthood
- Author
-
Thomas Semple, Sonya V. Babu-Narayan, Siew Yen Ho, Wei Li, and Elena Surkova
- Subjects
medicine.medical_specialty ,Cardiac computed tomography ,Modality (human–computer interaction) ,Heart disease ,business.industry ,Expert consensus ,Diagnostic strategy ,medicine.disease ,Multimodality imaging ,Atrial septal defects ,Multi modality ,Imaging modalities ,Echocardiography ,RC666-701 ,Internal medicine ,Atrial septal defect ,medicine ,Cardiology ,Diseases of the circulatory (Cardiovascular) system ,Cardiovascular magnetic resonance ,business ,Strengths and weaknesses - Abstract
Multimodality imaging in cardiology, and particularly congenital heart disease, has evolved into a critical tool, essential for clinical decision-making and management. Understanding the strengths and weaknesses of each imaging modality allows for timely and accurate diagnosis, enables their complementary use in answering specific clinical questions, facilitates management and prevents overuse of cardiovascular imaging ensuring the most appropriate and cost-effective diagnostic strategy for each patient. We provide herewith an expert consensus on the role of different cardiovascular imaging modalities in the assessment of the atrial septal defect and its haemodynamic consequences in adults, highlighting each modality's strengths and weaknesses, and clarifying how they are best applied in various clinical settings.
- Published
- 2021
190. A deep learning system for automated, multi-modality 2D segmentation of vertebral bodies and intervertebral discs
- Author
-
Chamith S. Rajapakse, Iman Fathali, Abhinav Suri, Albi Domi, Ashley Terry, Helene Chesnais, Brandon C. Jones, Nikita Bastin, Patrick Beyrer, Thomas Leichner, Nancy Anabaraonye, Grace Choi, Grace Ng, and Sisi Tang
- Subjects
0301 basic medicine ,medicine.medical_specialty ,Vertebral Body ,Histology ,Physiology ,Computer science ,Endocrinology, Diabetes and Metabolism ,030209 endocrinology & metabolism ,Article ,Multi modality ,03 medical and health sciences ,Deep Learning ,0302 clinical medicine ,Radiomics ,medicine ,Humans ,Segmentation ,Clinical imaging ,Entropy (energy dispersal) ,Intervertebral Disc ,Pixel ,business.industry ,Deep learning ,Network on ,Magnetic Resonance Imaging ,030104 developmental biology ,Neural Networks, Computer ,Radiology ,Artificial intelligence ,Tomography, X-Ray Computed ,business - Abstract
PURPOSE: Fractures in vertebral bodies are among the most common complications of osteoporosis and other bone diseases. However, studies that aim to predict future fractures and assess general spine health require manual delineation of vertebral bodies and intervertebral discs in imaging studies for further radiomic analysis. This study aims to develop a deep learning system that can automatically and rapidly segment (delineate) vertebrae and discs in MR, CT, and X-ray imaging studies. RESULTS: We constructed a neural network that outputs 2D segmentations for MR, CT, and X-ray imaging studies. We trained the network on 4490 MR, 550 CT, and 1935 X-ray imaging studies (post-data augmentation) from 2005–2020, spanning a wide variety of patient populations, bone disease statuses, and ages. Evaluated using 5-fold cross-validation, the network produced median Dice scores > 0.95 across all modalities for vertebral bodies and intervertebral discs (on the most central slice for MR/CT and on the full image for X-ray). Furthermore, radiomic features (skewness, kurtosis, mean of positive-value pixels, and entropy) calculated from the predicted segmentation masks were highly accurate (r ≥ 0.96 for all radiomic features when compared to ground truth). Mean time to produce outputs was < 1.7 seconds across all modalities. CONCLUSIONS: Our network rapidly produced segmentations of vertebral bodies and intervertebral discs for MR, CT, and X-ray imaging studies, and the radiomic quantities derived from these segmentations were highly accurate. Since the network produces outputs rapidly for these commonly used modalities, it can be put to immediate use in radiomic and clinical imaging studies assessing spine health.
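As an aside for readers reproducing the evaluation described in the abstract above: the Dice similarity coefficient reported there is twice the mask overlap divided by the summed mask sizes. A minimal sketch (this is not the authors' code; `dice_score` and the toy masks are invented for illustration):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks.

    Masks are given as flat sequences of 0/1 values of equal length.
    Returns 1.0 when both masks are empty (a common convention).
    """
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# Toy example: 3-pixel prediction vs. 2-pixel ground truth, 2 pixels overlap.
pred = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 0, 0]
print(dice_score(pred, truth))  # → 0.8
```

A perfect segmentation gives 1.0, no overlap gives 0.0; the median > 0.95 scores in the abstract therefore indicate near-complete overlap with the ground-truth delineations.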
- Published
- 2021
191. SP-0665 Multi-modality treatment of soft tissue sarcoma
- Author
-
A. Levy and C. Le Pechoux
- Subjects
medicine.medical_specialty ,Oncology ,business.industry ,Soft tissue sarcoma ,Medicine ,Radiology, Nuclear Medicine and imaging ,Hematology ,Radiology ,business ,medicine.disease ,Multi modality - Published
- 2021
192. Near‐Infrared Light‐Triggered Polyprodrug/siRNA Loaded Upconversion Nanoparticles for Multi‐Modality Imaging and Synergistic Cancer Therapy
- Author
-
Qingfei Zhang, Hongtong Lu, Shasha He, Yubin Huang, Jie Yu, Gaizhen Kuang, and Hejian Xiong
- Subjects
Combination therapy ,Infrared Rays ,Chemistry ,Genetic enhancement ,Biomedical Engineering ,Cancer therapy ,Pharmaceutical Science ,Gene delivery ,Multimodal Imaging ,Multi modality ,Biomaterials ,Upconversion nanoparticles ,Drug Delivery Systems ,In vivo ,Neoplasms ,Drug delivery ,Nanoparticles ,RNA, Small Interfering ,Biomedical engineering - Abstract
Stimuli-responsive nanosystems have been widely applied as effective modalities for drug/gene co-delivery in cancer treatment. However, precise spatiotemporal manipulation of drug/gene co-delivery, as well as multi-modality imaging-guided cancer therapy, remains a daunting challenge. Here, multifunctional polyprodrug/siRNA-loaded upconversion nanoparticles (UCNPs) are reported that combine computed tomography (CT), magnetic resonance (MR), and upconversion luminescence (UCL) tri-modality imaging with near-infrared (NIR) light-activated on-demand drug/gene delivery. The photoactivatable platinum(IV) (Pt(IV))-backbone polymers (PPt) and the siRNA targeting polo-like kinase 1 (Plk1) are loaded onto the surface of polyethyleneimine (PEI)-coated UCNPs (PUCNP) to obtain the multifunctional polyprodrug/siRNA-loaded UCNPs (PUCNP@Pt@siPlk1). PUCNP@Pt@siPlk1 can serve as a "nanotransducer" that converts NIR light (980 nm) into local ultraviolet (UV) to visible light for cleavage of the photosensitive PPt, resulting in simultaneous on-demand release of highly toxic platinum(II) (Pt(II)) and siPlk1. Meanwhile, PUCNP@Pt@siPlk1 has CT, T1-weighted MR, and UCL tri-modality imaging abilities. Based on these merits, PUCNP@Pt@siPlk1 displayed excellent synergistic therapeutic efficacy via image-guided, NIR light-activated platinum-based chemotherapy and RNA interference in vitro and in vivo. This nanosystem, with its NIR light-controlled drug/gene delivery and multi-modality imaging abilities, shows great potential for combining chemotherapy and gene therapy.
- Published
- 2021
193. One-for-all phototheranostics: Single component AIE dots as multi-modality theranostic agent for fluorescence-photoacoustic imaging-guided synergistic cancer therapy
- Author
-
Haifei Wen, Dong Wang, Yonghong Tan, Heng Guo, Wenhan Xu, Lei Xi, Ziyao Wen, Haoxuan Li, Kai Li, Jiachang Huang, Ben Zhong Tang, Qian Wu, Miaomiao Kang, Youmei Li, Lei Wang, and Zhijun Zhang
- Subjects
Materials science ,Biocompatibility ,Biophysics ,Cancer therapy ,Photoacoustic imaging in biomedicine ,Bioengineering ,Nanotechnology ,02 engineering and technology ,Theranostic Nanomedicine ,Photothermal conversion ,Multi modality ,Photoacoustic Techniques ,Biomaterials ,Mice ,03 medical and health sciences ,Neoplasms ,Animals ,Precision Medicine ,030304 developmental biology ,0303 health sciences ,Single component ,Photothermal therapy ,021001 nanoscience & nanotechnology ,Fluorescence ,Photochemotherapy ,Mechanics of Materials ,Ceramics and Composites ,Nanoparticles ,0210 nano-technology - Abstract
Construction of a single-component theranostic agent with one-for-all features that concurrently affords both multi-modality imaging and therapy is an appealing yet significantly challenging task. Herein, a type of luminogen with aggregation-induced emission (AIE) characteristics is carefully designed and facilely synthesized. These AIE luminogens (AIEgens) exhibit long emission wavelengths, good photostability, remarkable biocompatibility, good reactive oxygen species (ROS) generation performance, and excellent photothermal conversion efficiency, allowing them to be used effectively for in vitro and in vivo cancer phototheranostics. The results show that one of the AIEgens is capable of precisely diagnosing solid tumors in mice by means of combined near-infrared-I/II (NIR-I/II) fluorescence-photoacoustic imaging; meanwhile, this AIEgen can activate photodynamic and photothermal synergistic therapy (PDT-PTT) upon laser irradiation, resulting in excellent tumor elimination efficacy with only a single injection and irradiation. This study thus provides a versatile platform for practical cancer theranostics.
- Published
- 2021
194. Abstract 2812: A multi-modality robotic ultrasound and bioluminescence system provides a low-cost alternative to magnetic resonance imaging for measurement of orthotopic pancreatic tumors
- Author
-
Paul A. Dayton, Tomek J. Czernuszewicz, Yuliya Pylayeva-Gupta, Alexandra De Lille, Brian Velasco, Juan D. Rojas, and Jordan B. Joiner
- Subjects
Cancer Research ,Oncology ,medicine.diagnostic_test ,business.industry ,Ultrasound ,Medicine ,Bioluminescence ,Magnetic resonance imaging ,business ,Multi modality ,Biomedical engineering - Abstract
Solid tumor models are widely used to evaluate the efficacy of novel therapeutics. Calipers, a rapid but error-prone technique often used to measure volume in subcutaneous models, cannot measure orthotopic tumors, which have been shown to better represent clinical disease. Bioluminescence imaging (BLI), on the other hand, is a well-established, non-invasive technique for tracking tumor development. Yet several factors should be carefully considered when interpreting BLI data (signal depth attenuation, hypoxia, necrosis, D-luciferin kinetics, luciferase expression, immune response, etc.). As a result, researchers often include an anatomical modality such as magnetic resonance (MR) or ultrasound (US) imaging. MR imaging is the gold standard but suffers from long and expensive acquisitions, while conventional US is fast but requires skilled users and lacks repeatability. Here, we compare BLI tumor cell viability measurements with volume measurements from robotic US and MRI and highlight the value of tracking both anatomical and molecular readouts for therapeutic efficacy assessment. Pancreatic tumor cells (KPC 4662) were injected into the pancreas tail of 10 C57BL/6 female mice and imaged weekly with BLI, US, and MRI starting 7 days after injection. Robotic US and BLI images were acquired with a Strata (SonoVol, Inc.) system, and MRI images with the 9.4 Tesla BioSpec MR scanner (Bruker Biospin). US and MR images were digitally segmented for volume, and BLI signal was quantified using SonoEQ (SonoVol, Inc.). Tumor volumes obtained via robotic US and MRI had a strong correlation (R2 = 0.95). Bland-Altman analysis showed an insignificant mean bias of 12 mm3 (p = 0.28). The limits of agreement and coefficient of variation were 130 mm3 and 26%, respectively. BLI signal was detected at the first timepoint (7d), while tumors were not resolvable anatomically until 9d. BLI signal increased rapidly for the first 2 timepoints but remained stable thereafter. In comparison, tumor volume increased throughout the study, so there was a poor correlation between molecular BLI and volumetric anatomical US measurements (R2 = 0.69). This work demonstrates the importance of multimodality imaging of tumor growth. BLI signal plateaus because dead tumor cells at the core do not produce light; so while BLI is a valuable and highly sensitive tool for assessing cell viability, initial growth, and drug efficacy, the limitations mentioned above may bias assessment of response to therapy. Hence, a combined BLI signal and volume readout in orthotopic models may offer a more complete picture of tumor development and treatment efficacy. Robotic US provides measurements of orthotopic tumors similar to MRI while allowing faster scan times at reduced cost. Therefore, robotic ultrasound with integrated BLI allows for a more holistic assessment of tumor development and response to therapy. Citation Format: Juan Rojas, Jordan Joiner, Brian Velasco, Alexandra De Lille, Tomek J. Czernuszewicz, Yuliya Pylayeva-Gupta, Paul A. Dayton. A multi-modality robotic ultrasound and bioluminescence system provides a low-cost alternative to magnetic resonance imaging for measurement of orthotopic pancreatic tumors [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2021; 2021 Apr 10-15 and May 17-21. Philadelphia (PA): AACR; Cancer Res 2021;81(13_Suppl):Abstract nr 2812.
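The Bland-Altman statistics quoted in the abstract above (mean bias, 95% limits of agreement) can be computed for any pair of paired volume series. A minimal sketch with invented numbers (not the study's data; `bland_altman` and the `us`/`mri` values are hypothetical):

```python
import statistics


def bland_altman(a, b):
    """Bland-Altman agreement statistics for two paired measurement series.

    Returns the mean bias (mean of a-b differences) and the 95% limits of
    agreement, computed as bias ± 1.96 × sample standard deviation of the
    differences.
    """
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    half_width = 1.96 * statistics.stdev(diffs)  # sample (n-1) stdev
    return bias, (bias - half_width, bias + half_width)


# Hypothetical paired tumor volumes (mm^3) from US and MRI:
us = [100.0, 210.0, 340.0, 415.0]
mri = [95.0, 205.0, 350.0, 400.0]
bias, (lo, hi) = bland_altman(us, mri)
print(f"bias = {bias:.2f} mm^3, 95% LoA = [{lo:.2f}, {hi:.2f}] mm^3")
```

A bias near zero with narrow limits of agreement, as in the abstract's 12 mm3 bias, indicates that the two modalities can be used interchangeably for volume measurement.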
- Published
- 2021
195. PH-0603: Deep learning delineation of GTV for head and neck cancer with multi-modality imaging
- Author
-
S.S. Korremann, J. Ren, J.G. Eriksen, and J. Nijkamp
- Subjects
medicine.medical_specialty ,Oncology ,business.industry ,Deep learning ,Head and neck cancer ,medicine ,Radiology, Nuclear Medicine and imaging ,Hematology ,Radiology ,Artificial intelligence ,medicine.disease ,business ,Multi modality - Published
- 2020
196. Multi-modality Functional Aortic Valve Phantom for Haemodynamic Assessment
- Author
-
Pablo Lamata, Bernard Prendergast, Simon Redwood, Nili Shah, Ronak Rajani, Harminder Gill, and Joao Filipe Fernandes
- Subjects
Aortic valve ,medicine.medical_specialty ,business.industry ,Hemodynamics ,medicine.disease ,Imaging phantom ,Multi modality ,Rendering (computer graphics) ,Stenosis ,medicine.anatomical_structure ,Medicine ,Radiology ,Cardiology and Cardiovascular Medicine ,business - Abstract
Objective: Aortic stenosis (AS) is a prevalent valve condition with poor outcomes when left untreated. AS severity metrics are discordant in 30% of cases rendering clinical decision-making complex....
- Published
- 2021
197. MULTI-MODALITY IMAGING IN CARDIAC AMYLOIDOSIS
- Author
-
Rajesh Sachdeva, Aneesha Thobani, and Gautam Kumar
- Subjects
medicine.medical_specialty ,Cardiac amyloidosis ,business.industry ,medicine ,Radiology ,Cardiology and Cardiovascular Medicine ,business ,Multi modality - Published
- 2021
198. MULTI-MODALITY IMAGING FOR THE DIAGNOSIS OF BAFFLE STENOSIS
- Author
-
M. Beth Brickner, Michael Luna, Kristen Wong, and Spencer Carter
- Subjects
medicine.medical_specialty ,Stenosis ,business.industry ,medicine ,Baffle ,Radiology ,Cardiology and Cardiovascular Medicine ,business ,medicine.disease ,Multi modality - Published
- 2021
199. TOM, DICK, OR HARRY? ROLE OF MULTI-MODALITY IMAGING IN A PATIENT WITH ATYPICAL PRESENTATION OF A VERY LARGE LEFT VENTRICULAR PSEUDOANEURYSM
- Author
-
Muhammad Anwar and Prabhakaran Gopalakrishnan
- Subjects
medicine.medical_specialty ,business.industry ,Left ventricular pseudoaneurysm ,Medicine ,Radiology ,Presentation (obstetrics) ,Cardiology and Cardiovascular Medicine ,business ,Multi modality - Published
- 2021
200. ROLE OF MULTI-MODALITY IMAGING IN RISK STRATIFYING MITRAL VALVE PROLAPSE
- Author
-
Osama Tariq Niazi, Duane Heinrichs, Issa Ismail, Darryl Stein, Kelly Guld, and Stephen Hu
- Subjects
medicine.medical_specialty ,business.industry ,medicine ,Mitral valve prolapse ,Radiology ,Cardiology and Cardiovascular Medicine ,medicine.disease ,business ,Multi modality - Published
- 2021