7 results on '"Swaroop, Vedula"'
Search Results
2. Effect of real-time virtual reality-based teaching cues on learning needle passing for robot-assisted minimally invasive surgery: a randomized controlled trial
- Author
S. Swaroop Vedula, Gregory D. Hager, Henry C. Lin, Russell H. Taylor, and Anand Malpani
- Subjects
Computer science, Education, Biomedical Engineering, Health Informatics, General Medicine, Overlay, Virtual reality, Computer Graphics and Computer-Aided Design, Coaching, Computer Science Applications, Task (project management), Dreyfus model of skill acquisition, Randomized controlled trial, Human–computer interaction, Robot, Radiology, Nuclear Medicine and imaging, Surgery, Computer Vision and Pattern Recognition, Metric (unit)
- Abstract
Current virtual reality-based (VR) simulators for robot-assisted minimally invasive surgery (RAMIS) training lack effective teaching and coaching. Our first objective was to develop an automated teaching framework for VR training in RAMIS. Second, we wanted to study the effect of such real-time teaching cues on surgical technical skill acquisition. Third, we wanted to assess skill in terms of surgical technique in addition to traditional time and motion efficiency metrics. We implemented six teaching cues within a needle passing task on the da Vinci Skills Simulator platform (noncommercial research version). These teaching cues are graphical overlays designed to demonstrate ideal surgical technique, e.g., what path to follow while passing the needle through tissue. We created three coaching modes: teach (continuous demonstration), metrics (demonstration triggered by performance metrics), and user (demonstration upon user request). We conducted a randomized controlled trial in which the experimental group practiced with automated teaching and the control group practiced in a self-learning manner without it. We analyzed data from 30 participants (14 in the experimental group and 16 in the control group). After three practice repetitions, the control group showed greater improvement in time and motion efficiency, while the experimental group showed greater improvement in surgical technique relative to baseline. The experimental group improved more than the control group on a surgical technique metric (the angle at which the needle is grasped by an instrument), and the difference between groups was statistically significant. In this pilot randomized controlled trial, we observed that automated teaching cues can improve surgical technique in a VR simulator for RAMIS needle passing. Our study was limited by its recruitment of nonsurgeons and its evaluation of a single configuration of coaching modes.
- Published
- 2020
- Full Text
- View/download PDF
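The three coaching modes described in the abstract above (teach, metrics, user) amount to a gating policy that decides when a teaching-cue overlay is shown. A minimal sketch of such a policy, with hypothetical names and thresholds (the abstract does not describe the actual implementation):

```python
from enum import Enum

class CoachingMode(Enum):
    TEACH = "teach"      # continuous demonstration
    METRICS = "metrics"  # demonstration triggered by a performance metric
    USER = "user"        # demonstration upon explicit user request

def show_overlay(mode, metric_value=None, metric_threshold=None, user_requested=False):
    """Decide whether to render a graphical teaching-cue overlay now."""
    if mode is CoachingMode.TEACH:
        return True  # always demonstrate
    if mode is CoachingMode.METRICS:
        # demonstrate only when measured performance falls below a threshold
        return metric_value is not None and metric_value < metric_threshold
    if mode is CoachingMode.USER:
        return user_requested
    return False
```

A trial arm would fix one mode per participant; the sketch just makes the triggering logic of each mode explicit.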
3. Objective assessment of intraoperative technical skill in capsulorhexis using videos of cataract surgery
- Author
Sidra Zafar, Tae Soo Kim, S. Swaroop Vedula, Shameema Sikder, Gregory D. Hager, and Molly O'Brien
- Subjects
Computer science, Biomedical Engineering, Optical flow, Health Informatics, Cataract Extraction, Likert scale, Rating scale, Humans, Radiology, Nuclear Medicine and imaging, Medical physics, Capsulorhexis, Deep learning, Rubric, General Medicine, Cataract surgery, Computer Graphics and Computer-Aided Design, Computer Science Applications, Data set, Ophthalmology, Surgery, Clinical Competence, Educational Measurement, Computer Vision and Pattern Recognition, Artificial intelligence
- Abstract
Objective assessment of intraoperative technical skill is necessary for technology to improve patient care through surgical training. Our objective in this study was to develop and validate deep learning techniques for technical skill assessment using videos of the surgical field. We used a data set of 99 videos of capsulorhexis, a critical step in cataract surgery. One expert surgeon annotated each video for technical skill using a standard structured rating scale, the International Council of Ophthalmology’s Ophthalmology Surgical Competency Assessment Rubric: phacoemulsification (ICO-OSCAR:phaco). Using two capsulorhexis indices in this scale (commencement of flap and follow-through, formation and completion), we labeled a performance as expert when at least one of the indices was 5 and the other was at least 4, and as novice otherwise. In addition, we used scores for capsulorhexis commencement and capsulorhexis formation as separate ground truths (Likert scale of 2 to 5; analyzed as 2/3, 4, and 5). We crowdsourced annotations of instrument tips. We separately modeled instrument trajectories and optical flow using temporal convolutional neural networks to predict a skill class (expert/novice) and a score on each capsulorhexis item in ICO-OSCAR:phaco. We evaluated the algorithms in a five-fold cross-validation and computed accuracy and area under the receiver operating characteristic curve (AUC). The accuracy and AUC were 0.848 and 0.863 for instrument tip velocities, and 0.634 and 0.803 for optical flow fields, respectively. Deep neural networks effectively model surgical technical skill in capsulorhexis given structured representations of intraoperative data, such as optical flow fields extracted from video or crowdsourced tool localization information.
- Published
- 2019
- Full Text
- View/download PDF
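The abstract above models instrument-tip trajectories with temporal convolutional networks. As an illustration only (a toy NumPy sketch of the temporal-convolution idea, not the authors' network or data), one can apply a bank of 1-D temporal kernels along each coordinate of a trajectory and pool the responses into a fixed-length feature vector that a downstream classifier could consume:

```python
import numpy as np

def temporal_conv_features(x, kernels):
    """Convolve each coordinate of a (T, d) trajectory with each 1-D kernel
    (valid convolution), then max-pool over time and coordinates to get
    one feature per kernel."""
    feats = []
    for k in kernels:
        resp = np.stack([np.convolve(x[:, j], k, mode="valid")
                         for j in range(x.shape[1])])
        feats.append(resp.max())
    return np.array(feats)

# toy kernel bank: a moving-average smoother and a first-difference
# (velocity-like) filter -- hypothetical choices for illustration
kernels = [np.ones(5) / 5.0, np.array([1.0, -1.0])]

# synthetic 2-D instrument-tip trajectory (T = 100 samples)
t = np.linspace(0.0, 1.0, 100)
traj = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)

features = temporal_conv_features(traj, kernels)
```

In the paper the kernels are learned end-to-end rather than hand-picked; this sketch only shows the shape of the computation.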
4. Query-by-example surgical activity detection
- Author
Gregory D. Hager, Sanjeev Khudanpur, Mija R. Lee, Gyusung Lee, Yixin Gao, and S. Swaroop Vedula
- Subjects
Dynamic time warping, Jaccard index, Computer science, Biomedical Engineering, Information Storage and Retrieval, Health Informatics, Subsequence, Humans, Radiology, Nuclear Medicine and imaging, Query by Example, Template matching, Pattern recognition, General Medicine, Computer Graphics and Computer-Aided Design, Thresholding, Substring, Computer Science Applications, Surgical Procedures, Operative, Surgery, Computer Vision and Pattern Recognition, Artificial intelligence, Feature learning, Algorithms
- Abstract
Easy acquisition of surgical data opens many opportunities to automate skill evaluation and teaching. Current technology to search tool motion data for surgical activity segments of interest is limited by the need for manual pre-processing, which can be prohibitive at scale. We developed a content-based information retrieval method, query-by-example (QBE), to automatically detect activity segments that match a query within long surgical data recordings. The example segment of interest (query) and the surgical data recording (target trial) are time series of kinematics. Our approach includes an unsupervised feature learning module using a stacked denoising autoencoder (SDAE), two scoring modules based on asymmetric subsequence dynamic time warping (AS-DTW) and template matching, respectively, and a detection module. A distance matrix of the query against the trial is computed using the SDAE features, followed by AS-DTW combined with template scoring, to generate a ranked list of candidate subsequences (substrings). To evaluate the quality of the ranked list against the ground truth, thresholding of conventional DTW distances and bipartite matching are applied. We computed the recall, precision, F1-score, and a Jaccard index-based score on three experimental setups. We evaluated our QBE method using a suture throw maneuver as the query, on two tool motion datasets (JIGSAWS and MISTIC-SL) captured in a training laboratory. We observed recalls of 93, 90, and 87 % and precisions of 93, 91, and 88 % in the same surgeon same trial (SSST), same surgeon different trial (SSDT), and different surgeon (DS) setups on JIGSAWS, and recalls of 87, 81, and 75 % and precisions of 72, 61, and 53 % in the SSST, SSDT, and DS setups on MISTIC-SL, respectively. We developed a novel, content-based information retrieval method to automatically detect multiple instances of an activity within long surgical recordings. Our method demonstrated adequate recall across datasets of differing complexity and across experimental conditions.
- Published
- 2016
- Full Text
- View/download PDF
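The core alignment step in the abstract above is subsequence DTW, which lets a short query match any contiguous region of a long target trial (free start and end in the target). A minimal sketch for 1-D series, with toy data; the paper additionally uses SDAE features, an asymmetric step pattern, and template matching, all omitted here:

```python
import numpy as np

def subsequence_dtw(query, target):
    """Align `query` (length n) against any contiguous region of `target`
    (length m). Returns the minimal accumulated distance and the 1-based
    end position of the best-matching subsequence in the target."""
    n, m = len(query), len(target)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = 0.0  # free start: the match may begin anywhere in the target
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(query[i - 1] - target[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    end = int(np.argmin(D[n, 1:])) + 1  # free end: best column in last row
    return D[n, end], end

# a query embedded (with slight warping) inside a longer target series
query = [1.0, 2.0, 3.0, 2.0]
target = [0.0, 0.0, 1.0, 2.0, 2.0, 3.0, 2.0, 0.0, 0.0]
dist, end = subsequence_dtw(query, target)
```

In a QBE pipeline, the last row of `D` would be scanned for all local minima below a threshold, each yielding one candidate detection rather than only the single best.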
5. A study of crowdsourced segment-level surgical skill assessment using pairwise rankings
- Author
Gregory D. Hager, Anand Malpani, Chi Chiung Grace Chen, and S. Swaroop Vedula
- Subjects
Computer science, Biomedical Engineering, Health Informatics, Machine learning, Crowdsourcing, Task (project management), Percentile rank, Robotic Surgical Procedures, Margin (machine learning), Classifier (linguistics), Humans, Radiology, Nuclear Medicine and imaging, Simulation, Observer Variation, Models, Statistical, Reproducibility of Results, General Medicine, Computer Graphics and Computer-Aided Design, Computer Science Applications, Global Rating, Binary classification, General Surgery, Surgery, Pairwise comparison, Clinical Competence, Computer Vision and Pattern Recognition, Artificial intelligence, Algorithms
- Abstract
Currently available methods for surgical skill assessment are either subjective or provide only global evaluations for the overall task. Such global evaluations do not inform trainees about where in the task they need to perform better. In this study, we investigated the reliability and validity of a framework to generate objective skill assessments for segments within a task, and compared assessments from our framework, using crowdsourced segment ratings from surgically untrained individuals and expert surgeons, against manually assigned global rating scores. Our framework includes (1) a binary classifier trained to generate preferences for pairs of task segments (i.e., given a pair of segments, a specification of which one was performed better), (2) computation of segment-level percentile scores based on the preferences, and (3) prediction of task-level scores using the segment-level scores. We conducted a crowdsourcing user study to obtain manual preferences for segments within a suturing and knot-tying task from a crowd of surgically untrained individuals and a group of experts. We analyzed the inter-rater reliability of preferences obtained from the crowd and the experts, and investigated the validity of task-level scores obtained using our framework. In addition, we compared the accuracy of the crowd and expert preference classifiers, as well as the segment- and task-level scores obtained from them. We observed moderate inter-rater reliability within the crowd (Fleiss' kappa, κ = 0.41) and within the experts (κ = 0.55). For both the crowd and the experts, the accuracy of an automated classifier trained using all the task segments was above par compared with the inter-rater agreement [crowd classifier 85 % (SE 2 %); expert classifier 89 % (SE 3 %)]. We predicted the overall global rating scores (GRS) for the task with a root-mean-squared error lower than one standard deviation of the ground-truth GRS. We observed a high correlation between segment-level scores (ρ ≥ 0.86) obtained using the crowd and expert preference classifiers. The task-level scores obtained using the crowd and expert preference classifiers were also highly correlated with each other (ρ ≥ 0.84) and statistically equivalent within a margin of two points (for a score ranging from 6 to 30). Our analyses, however, did not demonstrate statistical equivalence in accuracy between the crowd and expert classifiers within a 10 % margin. Our framework, implemented using crowdsourced pairwise comparisons, leads to valid objective surgical skill assessments for segments within a task and for the task overall. Crowdsourcing yields reliable pairwise comparisons of skill for segments within a task with high efficiency. Our framework may be deployed within surgical training programs for objective, automated, and standardized evaluation of technical skills.
- Published
- 2015
- Full Text
- View/download PDF
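Step (2) of the framework above converts pairwise preferences into segment-level percentile-style scores. A toy sketch of one natural reading of that step, scoring each segment by the fraction of its comparisons it wins (names hypothetical; not the authors' exact scoring procedure):

```python
def percentile_scores(preferences, segments):
    """Score each segment by the fraction of pairwise comparisons it wins.
    `preferences` maps an ordered pair (a, b) of segment ids to the id of
    the segment judged to be performed better."""
    wins = {s: 0 for s in segments}
    counts = {s: 0 for s in segments}
    for (a, b), preferred in preferences.items():
        counts[a] += 1
        counts[b] += 1
        wins[preferred] += 1
    return {s: wins[s] / counts[s] if counts[s] else 0.0 for s in segments}

# toy example: three task segments, every pair compared once
segments = ["seg1", "seg2", "seg3"]
prefs = {("seg1", "seg2"): "seg1",
         ("seg1", "seg3"): "seg1",
         ("seg2", "seg3"): "seg3"}
scores = percentile_scores(prefs, segments)
```

A task-level score could then be predicted from these segment-level scores, e.g., by regression against global rating scores, which is step (3) of the framework.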
6. Integrating multiple data sources (MUDS) for meta-analysis to improve patient-centered outcomes research: a protocol for a systematic review
- Author
Sonal Singh, Catalina Suarez-Cuervo, Elizabeth Tolbert, Claire Twose, Diana Lock, Peter Doshi, Gillian Gresham, Tianjing Li, James Heyward, Theresa Cowley, Swaroop Vedula, Jeffrey T. Ehmsen, Jennifer L. Payne, Susan Hutfless, Lori Rosman, Evan Mayo-Wilson, Nicole Fusco, Elizabeth A. Stuart, Hwanhee Hong, Kay Dickersin, and Jennifer A. Haythornthwaite
- Subjects
Protocol (science), Patient-centered outcomes, Medicine (miscellaneous), Publication bias, Clinical trial, Systematic review, Reporting bias, Meta-analysis, Medical physics, Psychiatry, Meta-Analysis as Topic
- Abstract
Background
Systematic reviews should provide trustworthy guidance to decision-makers, but their credibility is challenged by the selective reporting of trial results and outcomes. Some trials are not published, and even among clinical trials that are published partially (e.g., as conference abstracts), many are never published in full. Although there are many potential sources of published and unpublished data for systematic reviews, there are no established methods for choosing among multiple reports or data sources about the same trial.
- Published
- 2015
- Full Text
- View/download PDF
7. Implementation of a publication strategy in the context of reporting biases. A case study based on new documents from Neurontin® litigation
- Author
S. Swaroop Vedula, Palko S Goldman, Thomas M Greene, Kay Dickersin, and Ilyas J Rona
- Subjects
Bipolar Disorder, Cyclohexanecarboxylic Acids, Drug Industry, Gabapentin, Migraine Disorders, Applied psychology, MEDLINE, Medicine (miscellaneous), Context (language use), Documentation, Drug Costs, Electronic Mail, Nociceptive Pain, Access to Information, Antimanic Agents, Humans, Pharmacology (medical), Amines, Psychiatry, gamma-Aminobutyric Acid, Marketing of Health Services, Analgesics, Clinical Trials as Topic, Evidence-Based Medicine, Conflict of Interest, Research, Fraud, Off-Label Use, Publication Bias, Correspondence as Topic, Authorship, Neuralgia, Periodicals as Topic, Psychology
- Abstract
Background
Previous studies have documented strategies to promote off-label use of drugs through journal publications and other means. Few studies have presented internal company communications that discussed financial reasons for manipulating the scholarly record related to off-label indications. The objective of this study was to build on previous studies to illustrate implementation of a publication strategy by the drug manufacturer for four off-label uses of gabapentin (Neurontin®, Pfizer, Inc.): migraine prophylaxis, treatment of bipolar disorders, neuropathic pain, and nociceptive pain.
Methods
We included in this study internal company documents, email correspondence, memoranda, study protocols, and reports that were made publicly available in 2008 as part of litigation brought by consumers and health insurers against Pfizer for fraudulent sales practices in its marketing of gabapentin (see http://pacer.mad.uscourts.gov/dc/cgi-bin/recentops.pl?filename=saris/pdf/ucl%20opinion.pdf for the Court’s findings). We reviewed documents pertaining to 20 clinical trials, 12 of which were published. We categorized our observations related to reporting biases and linked them with topics covered in internal documents: deciding what should and should not be published and how to spin the study findings (re-framing study results to explain away unfavorable findings or to emphasize favorable findings), and deciding where, when, and by whom findings should be published.
Results
We present extracts from internal company marketing assessments recommending that Pfizer and Parke-Davis (Pfizer acquired Parke-Davis in 2000) adopt a publication strategy to conduct trials and disseminate trial findings for unapproved uses, rather than an indication strategy to obtain regulatory approval. We show internal company email correspondence and documents revealing how publication content was influenced and spin was applied; how the company selected where trial findings would be presented or published; how publication of study results was delayed; and the role of ghost authorship.
Conclusions
Taken together, the extracts we present from internal company documents illustrate implementation of a strategy at odds with unbiased study conduct and dissemination. Our findings suggest that Pfizer and Parke-Davis’s publication strategy had the potential to distort the scientific literature and thus misinform healthcare decision-makers.
- Published
- 2012
- Full Text
- View/download PDF