61 results for "Bergeles C"
Search Results
2. Surgical Robotics: Towards Measurable Patient Benefits and Widespread Adoption
- Author
- Bergeles, C, Cruz Ruiz, A, Rodriguez y Baena, F, and Valdastri, P
- Published
- 2021
3. Single-camera focus-based localization of intraocular devices
- Author
- Bergeles, C., Shamaei, K., Abbott, J. J., and Nelson, B. J.
- Subjects
- Intraoperative radiotherapy -- Usage, Eye -- Medical examination, Biological sciences, Business, Computers, Health care industry
- Published
- 2010
4. Intravitreale magnetisch steuerbare Mikroroboter [Intravitreal magnetically steerable microrobots]
- Author
- Framme, Carsten, Bergeles, C., Ergeneman, O., Kratochvil, B., Kummer, M., Pane, S., Pocepcova, V., and Nelson, B. J.
- Subjects
- ddc: 610, 610 Medical sciences, Medicine
- Abstract
Purpose: Evaluation of magnetically steerable intravitreal minirobots for minimally invasive surgical procedures in the vitreous cavity. Methods: A wireless magnetic control system was developed to steer completely untethered microrobots in the vitreous cavity. [for full text, please go to the a.m. URL], 64. Tagung der Vereinigung Norddeutscher Augenärzte
- Published
- 2014
5. Intravitreale magnetisch steuerbare Mikroroboter [Intravitreal magnetically steerable microrobots]
- Author
- Framme, C, Bergeles, C, Ergeneman, O, Kratochvil, B, Kummer, M, Pane, S, Pocepcova, V, and Nelson, BJ
- Published
- 2014
6. Localized viscoelasticity measurements with untethered intravitreal microrobots
- Author
- Pokki, J., Ergeneman, O., Bergeles, C., Torun, H., and Nelson, B. J.
- Published
- 2012
- Full Text
- View/download PDF
7. Tracking intraocular microdevices based on colorspace evaluation and statistical color/shape information
- Author
- Bergeles, C., Fagogenis, G., Abbott, J.J., and Nelson, B.J.
- Published
- 2009
- Full Text
- View/download PDF
8. Model-based localization of intraocular microrobots for wireless electromagnetic control.
- Author
- Bergeles, C., Kratochvil, B.E., and Nelson, B.J.
- Published
- 2011
- Full Text
- View/download PDF
9. Wide-angle localization of intraocular devices from focus.
- Author
- Bergeles, C., Shamaei, K., Abbott, J.J., and Nelson, B.J.
- Published
- 2009
- Full Text
- View/download PDF
10. On imaging and localizing untethered intraocular devices with a stationary camera.
- Author
- Bergeles, C., Shamaei, K., Abbott, J.J., and Nelson, B.J.
- Published
- 2008
- Full Text
- View/download PDF
11. Erratum: Unpaired intra-operative OCT (iOCT) video super-resolution with contrastive learning: erratum.
- Author
- Komninos C, Pissas T, Flores B, Bloch E, Vercauteren T, Ourselin S, Da Cruz L, and Bergeles C
- Abstract
[This corrects the article on p. 772 in vol. 15, PMID: 38404298.]., Competing Interests: The authors declare no conflicts of interest., (© 2025 The Author(s).)
- Published
- 2025
- Full Text
- View/download PDF
12. Predicting 1, 2 and 3 year emergent referable diabetic retinopathy and maculopathy using deep learning.
- Author
- Nderitu P, Nunez do Rio JM, Webster L, Mann S, Cardoso MJ, Modat M, Hopkins D, Bergeles C, and Jackson TL
- Abstract
Background: Predicting diabetic retinopathy (DR) progression could enable individualised screening with prompt referral for high-risk individuals for sight-saving treatment, whilst reducing screening burden for low-risk individuals. We developed and validated deep learning systems (DLS) that predict 1, 2 and 3 year emergent referable DR and maculopathy using risk factor characteristics (tabular DLS), colour fundal photographs (image DLS) or both (multimodal DLS)., Methods: From 162,339 development-set eyes from south-east London (UK) diabetic eye screening programme (DESP), 110,837 had eligible longitudinal data, with the remaining 51,502 used for pretraining. Internal and external (Birmingham DESP, UK) test datasets included 27,996 and 6928 eyes, respectively., Results: Internal multimodal DLS emergent referable DR, maculopathy or either area under the receiver operating characteristic curve (AUROC) were 0.95 (95% CI: 0.92-0.98), 0.84 (0.82-0.86), 0.85 (0.83-0.87) for 1 year, 0.92 (0.87-0.96), 0.84 (0.82-0.87), 0.85 (0.82-0.87) for 2 years, and 0.85 (0.80-0.90), 0.79 (0.76-0.82), 0.79 (0.76-0.82) for 3 years. External multimodal DLS emergent referable DR, maculopathy or either AUROC were 0.93 (0.88-0.97), 0.85 (0.80-0.89), 0.85 (0.76-0.85) for 1 year, 0.93 (0.89-0.97), 0.79 (0.74-0.84), 0.80 (0.76-0.85) for 2 years, and 0.91 (0.84-0.98), 0.79 (0.74-0.83), 0.79 (0.74-0.84) for 3 years., Conclusions: Multimodal and image DLS performance is significantly better than tabular DLS at all intervals. DLS accurately predict 1, 2 and 3 year emergent referable DR and referable maculopathy using colour fundal photographs, with additional risk factor characteristics conferring improvements in prognostic performance. Proposed DLS are a step towards individualised risk-based screening, whereby AI-assistance allows high-risk individuals to be closely monitored while reducing screening burden for low-risk individuals., (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
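The AUROC values reported throughout these prediction studies reduce to a simple rank statistic: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal sketch of that computation (the labels and scores below are hypothetical, not the study's data):

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    random positive case outscores a random negative one (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted risks for four eyes (1 = emergent referable DR)
print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

Read this way, the 1-year multimodal AUROC of 0.95 would mean a 95% chance that the model ranks a progressing eye above a non-progressing one.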
13. Effect of socioeconomic deprivation as determined by the English deprivation deciles on the progression of diabetic retinopathy and maculopathy: a multivariate case-control analysis of 88 910 patients attending a South-East London diabetic eye screening service.
- Author
- Giannakis P, Nderitu P, Nunez do Rio JM, Webster L, Mann S, Hopkins D, Cardoso MJ, Modat M, Bergeles C, and Jackson TL
- Subjects
- Humans, Male, Female, Middle Aged, London epidemiology, Risk Factors, Case-Control Studies, Disease Progression, Aged, Social Deprivation, Incidence, Diabetes Mellitus, Type 2 epidemiology, Macular Degeneration epidemiology, Macular Degeneration diagnosis, Retrospective Studies, Diabetic Retinopathy epidemiology, Diabetic Retinopathy diagnosis
- Abstract
Purpose: To determine associations between deprivation, using the Index of Multiple Deprivation (IMD and individual IMD subdomains), and incident referable diabetic retinopathy/maculopathy (termed rDR)., Methods: Anonymised demographic and screening data collected by the South-East London Diabetic Eye Screening Programme were extracted from September 2013 to December 2019. Multivariable Cox proportional models were used to explore the association between the IMD, IMD subdomains and rDR., Results: From 118 508 people with diabetes who attended during the study period, 88 910 (75%) were eligible. The mean (± SD) age was 59.6 (±14.7) years; 53.94% were male, 52.58% identified as white, 94.28% had type 2 diabetes and the average duration of diabetes was 5.81 (±6.9) years; rDR occurred in 7113 patients (8.00%). Known risk factors of younger age, Black ethnicity, type 2 diabetes, more severe baseline DR and diabetes duration conferred a higher risk of incident rDR. After adjusting for these known risk factors, the multivariable analysis did not show a significant association between IMD (decile 1 vs decile 10) and rDR (HR: 1.08, 95% CI: 0.87 to 1.34, p=0.511). However, high deprivation (decile 1) in three IMD subdomains was associated with rDR, namely living environment (HR: 1.64, 95% CI: 1.12 to 2.41, p=0.011), education skills (HR: 1.64, 95% CI: 1.12 to 2.41, p=0.011) and income (HR: 1.19, 95% CI: 1.02 to 1.38, p=0.024)., Conclusion: IMD subdomains allow for the detection of associations between aspects of deprivation and rDR, which may be missed when using the aggregate IMD. The generalisation of these findings outside the UK population requires corroboration internationally., Competing Interests: PG, PN, JMNdR, LW, SM, DH, MJC, MM and CB have no conflicts of interest to declare.
TLJ's employer (King's College Hospital) receives funding for participants enrolled on commercial clinical trials of diabetic retinopathy including THR149-002 (sponsor: OXURION), NEON NPDR (sponsor: BAYER), RHONE-X (sponsor: ROCHE) and ALTIMETER (sponsor: ROCHE). He is a paid advisor to solicitors acting for REGENERON and has received conference support from ROCHE., (© Author(s) (or their employer(s)) 2024. No commercial re-use. See rights and permissions. Published by BMJ.)
- Published
- 2024
- Full Text
- View/download PDF
14. Publisher Correction: The impact of the size and angle of the cochlear basal turn on translocation of a pre‑curved mid‑scala cochlear implant electrode.
- Author
- Pai I, Connor S, Komninos C, Ourselin S, and Bergeles C
- Published
- 2024
- Full Text
- View/download PDF
15. Trocar localisation for robot-assisted vitreoretinal surgery.
- Author
- Birch J, Da Cruz L, Rhode K, and Bergeles C
- Subjects
- Humans, Motion, Surgical Instruments, Robotics, Vitreoretinal Surgery, Robotic Surgical Procedures
- Abstract
Purpose: Robot-assisted vitreoretinal surgery provides precise and consistent operations on the back of the eye. To perform this safely, knowledge of the surgical instrument's remote centre of motion (RCM) and the location of the insertion point into the eye (trocar) is required. This enables the robot to align both positions to pivot the instrument about the trocar, thus preventing any damaging lateral forces from being exerted., Methods: Building on a system developed in previous work, this study presents a trocar localisation method that uses a micro-camera mounted on a vitreoretinal surgical forceps, to track two ArUco markers attached on either side of a trocar. The trocar position is the estimated midpoint between the markers., Results: Experimental evaluation of the trocar localisation was conducted. Results showed an RMSE of 1.82 mm for the localisation of the markers and an RMSE of 1.24 mm for the trocar localisation., Conclusions: The proposed camera-based trocar localisation presents reasonable consistency and accuracy and shows improved results compared to other current methods. Optimum accuracy for this application would necessitate a 1.4 mm absolute error margin, which corresponds to the trocar's radius. The trocar localisation results are successfully found within this margin, yet the marker localisation would require further refinement to ensure consistency of localisation within the error margin. Further work will refine these position estimates and ensure the error stays consistently within this boundary., (© 2023. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
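The trocar estimate described above is the midpoint between two detected ArUco markers, with accuracy summarised as an RMSE over repeated position estimates. A minimal sketch of those two computations (marker detection itself is omitted; all coordinates below are hypothetical):

```python
import numpy as np

def trocar_midpoint(marker_a, marker_b):
    """Trocar position estimate: midpoint of two marker positions (N x 3)."""
    return (np.asarray(marker_a, dtype=float) + np.asarray(marker_b, dtype=float)) / 2.0

def rmse(estimates, ground_truth):
    """Root-mean-square error over a set of 3D position estimates."""
    diff = np.asarray(estimates, dtype=float) - np.asarray(ground_truth, dtype=float)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

# Hypothetical marker detections (mm) on either side of the trocar,
# over two video frames, with the corresponding ground-truth trocar position
a = [[10.0, 0.0, 5.0], [10.2, 0.1, 5.1]]
b = [[14.0, 0.0, 5.0], [13.8, 0.1, 5.1]]
true_trocar = [[12.0, 0.0, 5.0], [12.0, 0.0, 5.0]]
print(rmse(trocar_midpoint(a, b), true_trocar))  # ~0.1 mm
```

The paper's 1.4 mm error margin (the trocar's radius) would then be a simple threshold on this RMSE.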
16. Unpaired intra-operative OCT (iOCT) video super-resolution with contrastive learning.
- Author
- Komninos C, Pissas T, Flores B, Bloch E, Vercauteren T, Ourselin S, Da Cruz L, and Bergeles C
- Abstract
Regenerative therapies show promise in reversing sight loss caused by degenerative eye diseases. Their precise subretinal delivery can be facilitated by robotic systems together with Intra-operative Optical Coherence Tomography (iOCT). However, iOCT's real-time retinal layer information is compromised by inferior image quality. To address this limitation, we introduce an unpaired video super-resolution methodology for iOCT quality enhancement. A recurrent network is proposed to leverage temporal information from iOCT sequences, and spatial information from pre-operatively acquired OCT images. Additionally, a patchwise contrastive loss enables unpaired super-resolution. Extensive quantitative analysis demonstrates that our approach outperforms existing state-of-the-art iOCT super-resolution models. Furthermore, ablation studies showcase the importance of temporal aggregation and contrastive loss in elevating iOCT quality. A qualitative study involving expert clinicians also confirms this improvement. The comprehensive evaluation demonstrates our method's potential to enhance the iOCT image quality, thereby facilitating successful guidance for regenerative therapies., Competing Interests: The authors declare no conflicts of interest., (© 2024 The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
17. The impact of the size and angle of the cochlear basal turn on translocation of a pre-curved mid-scala cochlear implant electrode.
- Author
- Pai I, Connor S, Komninos C, Ourselin S, and Bergeles C
- Subjects
- Humans, Cochlea surgery, Electrodes, Implanted, Bionics, Translocation, Genetic, Cochlear Implants, Cochlear Implantation, Craniocerebral Trauma
- Abstract
Scalar translocation is a severe form of intra-cochlear trauma during cochlear implant (CI) electrode insertion. This study explored the hypothesis that the dimensions of the cochlear basal turn and orientation of its inferior segment relative to surgically relevant anatomical structures influence the scalar translocation rates of a pre-curved CI electrode. In a cohort of 40 patients implanted with the Advanced Bionics Mid-Scala electrode array, the scalar translocation group (40%) had a significantly smaller mean distance A of the cochlear basal turn (p < 0.001) and wider horizontal angle between the inferior segment of the cochlear basal turn and the mastoid facial nerve (p = 0.040). A logistic regression model incorporating distance A (p = 0.003) and horizontal facial nerve angle (p = 0.017) explained 44.0-59.9% of the variance in scalar translocation and correctly classified 82.5% of cases. Every 1mm decrease in distance A was associated with a 99.2% increase in odds of translocation [95% confidence interval 80.3%, 100%], whilst every 1-degree increase in the horizontal facial nerve angle was associated with an 18.1% increase in odds of translocation [95% CI 3.0%, 35.5%]. The study findings provide an evidence-based argument for the development of a navigation system for optimal angulation of electrode insertion during CI surgery to reduce intra-cochlear trauma., (© 2023. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
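The "percent increase in odds" figures quoted above are the standard reading of logistic-regression coefficients, since the odds ratio for a unit change in a predictor is exp(β). A sketch using coefficients back-solved from the quoted percentages (these β values are hypothetical illustrations, not the study's fitted model):

```python
import math

def pct_odds_change(beta, delta=1.0):
    """Percent change in odds for a `delta` change in a predictor, given a
    logistic-regression coefficient beta: 100 * (exp(beta * delta) - 1)."""
    return 100.0 * (math.exp(beta * delta) - 1.0)

# Hypothetical coefficients, back-solved from the quoted odds changes
beta_angle = math.log(1.181)   # horizontal facial nerve angle, per degree
print(round(pct_odds_change(beta_angle), 1))             # 18.1
beta_dist = -math.log(1.992)   # distance A, per mm (larger A lowers odds)
print(round(pct_odds_change(beta_dist, delta=-1.0), 1))  # 99.2
```

The second call shows why a 1 mm *decrease* in distance A corresponds to the quoted 99.2% increase in translocation odds when the coefficient itself is negative.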
18. Semiautonomous Robotic Manipulator for Minimally Invasive Aortic Valve Replacement.
- Author
- Tamadon I, Sadati SMH, Mamone V, Ferrari V, Bergeles C, and Menciassi A
- Abstract
Aortic valve surgery is the preferred procedure for replacing a damaged valve with an artificial one. The ValveTech robotic platform comprises a flexible articulated manipulator and surgical interface supporting the effective delivery of an artificial valve by teleoperation and endoscopic vision. This article presents our recent work on force-perceptive, safe, semiautonomous navigation of the ValveTech platform prior to valve implantation. First, we present a force observer that transfers forces from the manipulator body and tip to a haptic interface. Second, we demonstrate how hybrid forward/inverse mechanics, together with endoscopic visual servoing, lead to autonomous valve positioning. Benchtop experiments and an artificial phantom quantify the performance of the developed robot controller and navigator. Valves can be autonomously delivered with a 2.0±0.5 mm position error and a minimal misalignment of 3.4±0.9°. The hybrid force/shape observer (FSO) algorithm was able to predict distributed external forces on the articulated manipulator body with an average error of 0.09 N. FSO can also estimate loads on the tip with an average accuracy of 3.3%. The presented system can lead to better patient care, delivery outcome, and surgeon comfort during aortic valve surgery, without requiring sensorization of the robot tip, and therefore obviating miniaturization constraints.
- Published
- 2023
- Full Text
- View/download PDF
19. Comparative verification of control methodology for robotic interventional neuroradiology procedures.
- Author
- Jackson B, Crinnion W, De Iturrate Reyzabal M, Robertshaw H, Bergeles C, Rhode K, and Booth T
- Subjects
- Humans, Catheterization methods, Catheters, Prolapse, Robotics, Endovascular Procedures, Robotic Surgical Procedures
- Abstract
Purpose: The use of robotics is emerging for performing interventional radiology procedures. Robots in interventional radiology are typically controlled using button presses and joystick movements. This study identified how different human-robot interfaces affect endovascular surgical performance using interventional radiology simulations., Methods: Nine participants performed a navigation task on an interventional radiology simulator with three different human-computer interfaces. Using Simulation Open Framework Architecture we developed a simulation profile of vessels, catheters and guidewires. We designed and manufactured a bespoke haptic interventional radiology controller for robotic systems to control the simulation. Metrics including time taken for navigation, number of incorrect catheterisations, number of catheter and guidewire prolapses and forces applied to vessel walls were measured and used to characterise the interfaces. Finally, participants responded to a questionnaire to evaluate the perception of the controllers., Results: Time taken for navigation, number of incorrect catheterisations and the number of catheter and guidewire prolapses, showed that the device-mimicking controller is better suited for controlling interventional neuroradiology procedures over joystick control approaches. Qualitative metrics also showed that interventional radiologists prefer a device-mimicking controller approach over a joystick approach., Conclusion: Of the four metrics used to compare and contrast the human-robot interfaces, three conclusively showed that a device-mimicking controller was better suited for controlling interventional neuroradiology robotics., (© 2023. The Author(s).)
- Published
- 2023
- Full Text
- View/download PDF
20. Reduced order modeling and model order reduction for continuum manipulators: an overview.
- Author
- Sadati SMH, Naghibi SE, da Cruz L, and Bergeles C
- Abstract
Soft robots' natural dynamics call for the development of tailored modeling techniques for control. However, the high-dimensional configuration space of the geometrically exact modeling approaches for soft robots, i.e., Cosserat rod and Finite Element Methods (FEM), has been identified as a key obstacle in controller design. To address this challenge, Reduced Order Modeling (ROM), i.e., the approximation of the full-order models, and Model Order Reduction (MOR), i.e., reducing the state space dimension of a high fidelity FEM-based model, are enjoying extensive research. Although both techniques serve a similar purpose and their terms have been used interchangeably in the literature, they are different in their assumptions and implementation. This review paper provides the first in-depth survey of ROM and MOR techniques in the continuum and soft robotics landscape to aid Soft Robotics researchers in selecting computationally efficient models for their specific tasks., Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2023 Sadati, Naghibi, da Cruz and Bergeles.)
- Published
- 2023
- Full Text
- View/download PDF
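As a concrete illustration of the MOR idea the review surveys, reducing state-space dimension by projecting onto dominant modes, here is a minimal POD-style sketch on synthetic snapshot data (a generic example, not any specific formulation from the paper):

```python
import numpy as np

# Synthetic "full-order" data: 200 snapshots of a 50-dimensional state that
# actually lives on a 3-dimensional subspace (rank-3 by construction).
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 50))

# MOR step: SVD of the snapshot matrix, keep the r dominant modes.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 3
basis = Vt[:r].T                 # (50, r) reduced basis
reduced = snapshots @ basis      # (200, r) reduced-order coordinates
reconstructed = reduced @ basis.T

rel_err = np.linalg.norm(snapshots - reconstructed) / np.linalg.norm(snapshots)
print(rel_err < 1e-10)  # True: three modes capture the rank-3 data exactly
```

A ROM, by contrast, would replace the full-order model itself with a cheaper approximation rather than projecting its state, which is the distinction the review draws between the two terms.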
21. Multimodal testing reveals subclinical neurovascular dysfunction in prediabetes, challenging the diagnostic threshold of diabetes.
- Author
- Kirthi V, Reed KI, Alattar K, Zuckerman BP, Bunce C, Nderitu P, Alam U, Clarke B, Hau S, Al-Shibani F, Petropoulos IN, Malik RA, Pissas T, Bergeles C, Vas P, Hopkins D, and Jackson TL
- Subjects
- Humans, Cross-Sectional Studies, Retina, Prediabetic State diagnosis, Diabetes Mellitus, Type 2 diagnosis, Retinal Diseases
- Abstract
Aim: To explore if novel non-invasive diagnostic technologies identify early small nerve fibre and retinal neurovascular pathology in prediabetes., Methods: Participants with normoglycaemia, prediabetes or type 2 diabetes underwent an exploratory cross-sectional analysis with optical coherence tomography angiography (OCT-A), handheld electroretinography (ERG), corneal confocal microscopy (CCM) and evaluation of electrochemical skin conductance (ESC)., Results: Seventy-five participants with normoglycaemia (n = 20), prediabetes (n = 29) and type 2 diabetes (n = 26) were studied. Compared with normoglycaemia, mean peak ERG amplitudes of retinal responses at low (16-Td·s: 4.05 μV, 95% confidence interval [95% CI] 0.96-7.13) and high (32-Td·s: 5.20 μV, 95% CI 1.54-8.86) retinal illuminance were lower in prediabetes, as were OCT-A parafoveal vessel densities in superficial (0.051 pixels/mm², 95% CI 0.005-0.095) and deep (0.048 pixels/mm², 95% CI 0.003-0.093) retinal layers. There were no differences in CCM or ESC measurements between these two groups. Correlations between HbA1c and peak ERG amplitude at 32-Td·s (r = -0.256, p = 0.028), implicit time at 32-Td·s (r = 0.422, p < 0.001) and 16-Td·s (r = 0.327, p = 0.005), OCT parafoveal vessel density in the superficial (r = -0.238, p = 0.049) and deep (r = -0.3, p = 0.017) retinal layers, corneal nerve fibre length (CNFL) (r = -0.293, p = 0.017), and ESC-hands (r = -0.244, p = 0.035) were observed. HOMA-IR was a predictor of CNFD (β = -0.94, 95% CI -1.66 to -0.21, p = 0.012) and CNBD (β = -5.02, 95% CI -10.01 to -0.05, p = 0.048)., Conclusions: The glucose threshold for the diagnosis of diabetes is based on emergent retinopathy on fundus examination. We show that both abnormal retinal neurovascular structure (OCT-A) and function (ERG) may precede retinopathy in prediabetes, which require confirmation in larger, adequately powered studies., (© 2022 The Authors. Diabetic Medicine published by John Wiley & Sons Ltd on behalf of Diabetes UK.)
- Published
- 2023
- Full Text
- View/download PDF
22. Towards A Physics-based Model for Steerable Eversion Growing Robots.
- Author
- Wu Z, De Iturrate Reyzabal M, Sadati SMH, Liu H, Ourselin S, Leff D, Katzschmann RK, Rhode K, and Bergeles C
- Abstract
Soft robots that grow through eversion/apical extension can effectively navigate fragile environments such as ducts and vessels inside the human body. This paper presents the physics-based model of a miniature steerable eversion growing robot. We demonstrate the robot's growing, steering, stiffening and interaction capabilities. The interaction between two robot-internal components is explored, i.e., a steerable catheter for robot tip orientation, and a growing sheath for robot elongation/retraction. The behavior of the growing robot under different inner pressures and external tip forces is investigated. Simulations are carried out within the SOFA framework. Extensive experimentation with a physical robot setup demonstrates agreement with the simulations. The comparison demonstrates a mean absolute error of 10-20% between simulation and experimental results for curvature values, including catheter-only experiments, sheath-only experiments and full system experiments. To our knowledge, this is the first work to explore physics-based modelling of a tendon-driven steerable eversion growing robot. While our work is motivated by early breast cancer detection through mammary duct inspection and uses our MAMMOBOT robot prototype, our approach is general and relevant to similar growing robots.
- Published
- 2023
- Full Text
- View/download PDF
23. Using deep learning to detect diabetic retinopathy on handheld non-mydriatic retinal images acquired by field workers in community settings.
- Author
- Nunez do Rio JM, Nderitu P, Raman R, Rajalakshmi R, Kim R, Rani PK, Sivaprasad S, and Bergeles C
- Subjects
- Humans, Diabetes Mellitus pathology, Mass Screening methods, Mydriatics, Photography methods, Prospective Studies, Retina diagnostic imaging, Sensitivity and Specificity, Deep Learning, Diabetic Retinopathy diagnostic imaging, Diagnostic Imaging methods
- Abstract
Diabetic retinopathy (DR) at risk of vision loss (referable DR) needs to be identified by retinal screening and referred to an ophthalmologist. Existing automated algorithms have mostly been developed from images acquired with high cost mydriatic retinal cameras and cannot be applied in the settings used in most low- and middle-income countries. In this prospective multicentre study, we developed a deep learning system (DLS) that detects referable DR from retinal images acquired using handheld non-mydriatic fundus camera by non-technical field workers in 20 sites across India. Macula-centred and optic-disc-centred images from 16,247 eyes (9778 participants) were used to train and cross-validate the DLS and risk factor based logistic regression models. The DLS achieved an AUROC of 0.99 (1000 times bootstrapped 95% CI 0.98-0.99) using two-field retinal images, with 93.86 (91.34-96.08) sensitivity and 96.00 (94.68-98.09) specificity at the Youden's index operational point. With single field inputs, the DLS reached AUROC of 0.98 (0.98-0.98) for the macula field and 0.96 (0.95-0.98) for the optic-disc field. Intergrader performance was 90.01 (88.95-91.01) sensitivity and 96.09 (95.72-96.42) specificity. The image based DLS outperformed all risk factor-based models. This DLS demonstrated a clinically acceptable performance for the identification of referable DR despite challenging image capture conditions., (© 2023. The Author(s).)
- Published
- 2023
- Full Text
- View/download PDF
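The sensitivity/specificity pair above is reported "at the Youden's index operational point", i.e. at the ROC threshold maximising J = sensitivity + specificity - 1. A minimal sketch of selecting that point (the threshold sweep below is hypothetical):

```python
def youden_point(thresholds, sensitivities, specificities):
    """Return the operating point maximising Youden's J = sens + spec - 1."""
    best = max(range(len(thresholds)),
               key=lambda i: sensitivities[i] + specificities[i] - 1.0)
    return thresholds[best], sensitivities[best] + specificities[best] - 1.0

# Hypothetical ROC sweep over three candidate decision thresholds
ths = [0.2, 0.5, 0.8]
sens = [0.99, 0.94, 0.70]
spec = [0.60, 0.96, 0.99]
threshold, j = youden_point(ths, sens, spec)
print(threshold)  # 0.5
```

In practice the sweep runs over every distinct model score; the chosen threshold balances missed referable DR against false referrals without favouring either.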
24. Automated image curation in diabetic retinopathy screening using deep learning.
- Author
- Nderitu P, Nunez do Rio JM, Webster ML, Mann SS, Hopkins D, Cardoso MJ, Modat M, Bergeles C, and Jackson TL
- Subjects
- Humans, Mass Screening methods, Retina diagnostic imaging, Deep Learning, Diabetes Mellitus, Diabetic Retinopathy diagnostic imaging, Macula Lutea
- Abstract
Diabetic retinopathy (DR) screening images are heterogeneous and contain undesirable non-retinal, incorrect field and ungradable samples which require curation, a laborious task to perform manually. We developed and validated single and multi-output laterality, retinal presence, retinal field and gradability classification deep learning (DL) models for automated curation. The internal dataset comprised of 7743 images from DR screening (UK) with 1479 external test images (Portugal and Paraguay). Internal vs external multi-output laterality AUROC were right (0.994 vs 0.905), left (0.994 vs 0.911) and unidentifiable (0.996 vs 0.680). Retinal presence AUROC were (1.000 vs 1.000). Retinal field AUROC were macula (0.994 vs 0.955), nasal (0.995 vs 0.962) and other retinal field (0.997 vs 0.944). Gradability AUROC were (0.985 vs 0.918). DL effectively detects laterality, retinal presence, retinal field and gradability of DR screening images with generalisation between centres and populations. DL models could be used for automated image curation within DR screening., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
25. Robotics in neurointerventional surgery: a systematic review of the literature.
- Author
- Crinnion W, Jackson B, Sood A, Lynch J, Bergeles C, Liu H, Rhode K, Mendes Pereira V, and Booth TC
- Subjects
- Cerebral Angiography, Humans, Vascular Surgical Procedures, Robotics
- Abstract
Background: Robotically performed neurointerventional surgery has the potential to reduce occupational hazards to staff, perform intervention with greater precision, and could be a viable solution for teleoperated neurointerventional procedures., Objective: To determine the indication, robotic systems used, efficacy, safety, and the degree of manual assistance required for robotically performed neurointervention., Methods: We conducted a systematic review of the literature up to, and including, articles published on April 12, 2021. Medline, PubMed, Embase, and Cochrane register databases were searched using medical subject heading terms to identify reports of robotically performed neurointervention, including diagnostic cerebral angiography and carotid artery intervention., Results: A total of 8 articles treating 81 patients were included. Only one case report used a robotic system for intracranial intervention, the remaining indications being cerebral angiography and carotid artery intervention. Only one study performed a comparison of robotic and manual procedures. Across all studies, the technical success rate was 96% and the clinical success rate was 100%. All cases required a degree of manual assistance. No studies had clearly defined patient selection criteria, reference standards, or index tests, preventing meaningful statistical analysis., Conclusions: Given the clinical success, it is plausible that robotically performed neurointerventional procedures will eventually benefit patients and reduce occupational hazards for staff; however, there is no high-level efficacy and safety evidence to support this assertion. Limitations of current robotic systems and the challenges that must be overcome to realize the potential for remote teleoperated neurointervention require further investigation., Competing Interests: Competing interests: VMP has acted as a consultant for Corindus and is an investigator in the Corpath GRX Neuro Study. 
The remaining authors have no conflicts of interest to declare., (© Author(s) (or their employer(s)) 2022. Re-use permitted under CC BY. Published by BMJ.)
- Published
- 2022
- Full Text
- View/download PDF
26. Surgical biomicroscopy-guided intra-operative optical coherence tomography (iOCT) image super-resolution.
- Author
- Komninos C, Pissas T, Mekki L, Flores B, Bloch E, Vercauteren T, Ourselin S, Da Cruz L, and Bergeles C
- Subjects
- Cross-Sectional Studies, Humans, Slit Lamp, Retina diagnostic imaging, Retina surgery, Tomography, Optical Coherence methods
- Abstract
Purpose: Intra-retinal delivery of novel sight-restoring therapies will require the precision of robotic systems accompanied by excellent visualisation of retinal layers. Intra-operative Optical Coherence Tomography (iOCT) provides cross-sectional retinal images in real time but at the cost of image quality that is insufficient for intra-retinal therapy delivery. This paper proposes a super-resolution methodology that improves iOCT image quality leveraging spatiotemporal consistency of incoming iOCT video streams., Methods: To overcome the absence of ground truth high-resolution (HR) images, we first generate HR iOCT images by fusing spatially aligned iOCT video frames. Then, we automatically assess the quality of the HR images on key retinal layers using a deep semantic segmentation model. Finally, we use image-to-image translation models (Pix2Pix and CycleGAN) to enhance the quality of low-resolution (LR) images via quality transfer from the estimated HR domain., Results: Our proposed methodology generates iOCT images of improved quality according to both full-reference and no-reference metrics. A qualitative study with expert clinicians also confirms the improvement in the delineation of pertinent layers and in the reduction of artefacts. Furthermore, our approach outperforms conventional denoising filters and the learning-based state-of-the-art., Conclusions: The results indicate that the learning-based methods using the estimated, through our pipeline, HR domain can be used to enhance the iOCT image quality. Therefore, the proposed method can computationally augment the capabilities of iOCT imaging helping this modality support the vitreoretinal surgical interventions of the future., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
27. Deep homography estimation in dynamic surgical scenes for laparoscopic camera motion extraction.
- Author
- Huber M, Ourselin S, Bergeles C, and Vercauteren T
- Abstract
Current laparoscopic camera motion automation relies on rule-based approaches or only focuses on surgical tools. Imitation Learning (IL) methods could alleviate these shortcomings, but have so far been applied to oversimplified setups. Instead of extracting actions from oversimplified setups, in this work we introduce a method that extracts a laparoscope holder's actions from videos of laparoscopic interventions. We synthetically add camera motion to a newly acquired dataset of camera-motion-free da Vinci surgery image sequences through a novel homography generation algorithm. The synthetic camera motion serves as a supervisory signal for camera motion estimation that is invariant to object and tool motion. We perform an extensive evaluation of state-of-the-art (SOTA) Deep Neural Networks (DNNs) across multiple compute regimes, finding our method transfers from our camera-motion-free da Vinci surgery dataset to videos of laparoscopic interventions, outperforming classical homography estimation approaches in both precision (by 41%) and runtime on a CPU (by 43%)., Competing Interests: TV is supported by a Medtronic/RAEng Research Chair [Royal Academy of Engineering RCSRF1819\7\34]. SO and TV are co-founders and shareholders of Hypervision Surgical. TV holds shares from Mauna Kea Technologies., (© 2022 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.)
- Published
- 2022
- Full Text
- View/download PDF
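The supervisory signal above comes from synthetically generated homographies. A generic sketch of how such camera motion can be simulated — not the paper's specific generation algorithm — samples a homography by jittering the four image corners and solving the Direct Linear Transform (DLT); both function names are hypothetical.

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct Linear Transform: solve for the 3x3 homography H such that
    dst ~ H @ src (in homogeneous coordinates) from four correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # The homography is the (right) null vector of A.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def random_camera_motion(h, w, max_offset=0.1, rng=None):
    """Sample a synthetic camera motion as a homography by jittering the
    four image corners (four-point parameterisation)."""
    if rng is None:
        rng = np.random.default_rng()
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=float)
    jitter = rng.uniform(-max_offset, max_offset, size=(4, 2)) * [w, h]
    return homography_from_points(corners, corners + jitter)
```

Warping a static frame with such a homography yields an image pair whose relative motion is known exactly, which is the kind of ground truth a camera-motion estimator can be trained against.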
28. OxVent: Design and evaluation of a rapidly-manufactured Covid-19 ventilator.
- Author
-
Beale R, Rosendo JB, Bergeles C, Beverly A, Camporota L, Castrejón-Pita AA, Crockett DC, Cronin JN, Denison T, East S, Edwardes C, Farmery AD, Fele F, Fisk J, Fuenteslópez CV, Garstka M, Goulart P, Heaysman C, Hussain A, Jha P, Kempf I, Kumar AS, Möslein A, Orr ACJ, Ourselin S, Salisbury D, Seneci C, Staruch R, Steel H, Thompson M, Tran MC, Vitiello V, Xochicale M, Zhou F, Formenti F, and Kirk T
- Subjects
- Animals, COVID-19 pathology, COVID-19 prevention & control, COVID-19 virology, Female, Male, Respiratory Rate, SARS-CoV-2 isolation & purification, Swine, Tidal Volume, Equipment Design, Respiration, Artificial instrumentation
- Abstract
Background: The manufacturing of any standard mechanical ventilator cannot rapidly be upscaled to several thousand units per week, largely due to supply chain limitations. The aim of this study was to design, verify and perform a pre-clinical evaluation of a mechanical ventilator based on components not required for standard ventilators, and that met the specifications provided by the Medicines and Healthcare products Regulatory Agency (MHRA) for rapidly-manufactured ventilator systems (RMVS)., Methods: The design utilises closed-loop negative-feedback control, with real-time monitoring and alarms. Using a standard test lung, we determined the difference between delivered and target tidal volume (VT) at respiratory rates between 20 and 29 breaths per minute, and the ventilator's ability to deliver consistent VT during continuous operation for >14 days (RMVS specification). Additionally, four anaesthetised domestic pigs (3 male, 1 female) were studied before and after lung injury to provide evidence of the ventilator's functionality and its ability to support spontaneous breathing., Findings: Continuous operation lasted 23 days, during which the greatest difference between delivered and target VT was 10% at inspiratory flow rates >825 mL/s. In the pre-clinical evaluation, the VT difference was -1 (-90 to 88) mL [mean (LoA)], and the positive end-expiratory pressure (PEEP) difference was -2 (-8 to 4) cmH2O. VT delivery being triggered by pressures below PEEP demonstrated spontaneous ventilation support., Interpretation: The mechanical ventilator presented meets the MHRA therapy standards for RMVS and, being based on largely available components, can be manufactured at scale., Funding: Work supported by the Wellcome/EPSRC Centre for Medical Engineering, King's Together Fund and Oxford University., Competing Interests: Declaration of interests FFo reports grants from the National Institute for Health Research (UK), the National Institute of Academic Anaesthesia, and the Wellcome/EPSRC Centre for Medical Engineering. AF, FFo, SO and MT are volunteering directors of OxVent, a joint-venture social enterprise for mechanical ventilation between Oxford University and King's College London. TD is on the advisory board of OxVent. AAC-P, AF, FFo, MT, PG, SO and TD have shares in OxVent Ltd. AH and CVF are part-time employees of OxVent Ltd., (Copyright © 2022 The Authors. Published by Elsevier B.V. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
29. Evaluating a Deep Learning Diabetic Retinopathy Grading System Developed on Mydriatic Retinal Images When Applied to Non-Mydriatic Community Screening.
- Author
-
Nunez do Rio JM, Nderitu P, Bergeles C, Sivaprasad S, Tan GSW, and Raman R
- Abstract
Artificial Intelligence has showcased clear capabilities to automatically grade diabetic retinopathy (DR) on mydriatic retinal images captured by clinical experts on fixed table-top retinal cameras within hospital settings. However, in many low- and middle-income countries, screening for DR revolves around minimally trained field workers using handheld non-mydriatic cameras in community settings. This prospective study evaluated the diagnostic accuracy of a deep learning algorithm developed using mydriatic retinal images by the Singapore Eye Research Institute, commercially available as Zeiss VISUHEALTH-AI DR, on images captured by field workers on a Zeiss Visuscout® 100 non-mydriatic handheld camera from people with diabetes in a house-to-house cross-sectional study across 20 regions in India. A total of 20,489 patient eyes from 11,199 patients were used to evaluate algorithm performance in identifying referable DR, non-referable DR, and gradability. For each category, the algorithm achieved precision values of 29.60 (95% CI 27.40, 31.88), 92.56 (92.13, 92.97), and 58.58 (56.97, 60.19), recall values of 62.69 (59.17, 66.12), 85.65 (85.11, 86.18), and 65.06 (63.40, 66.69), and F-score values of 40.22 (38.25, 42.21), 88.97 (88.62, 89.31), and 61.65 (60.50, 62.80), respectively. Model performance reached 91.22 (90.79, 91.64) sensitivity and 65.06 (63.40, 66.69) specificity at detecting gradability, and 72.08 (70.68, 73.46) sensitivity and 85.65 (85.11, 86.18) specificity for the detection of all referable eyes. Algorithm accuracy is dependent on the quality of acquired retinal images, and this is a major limiting step for its global implementation in community non-mydriatic DR screening using handheld cameras. This study highlights the need to develop and train deep learning-based screening tools in such conditions before implementation.
- Published
- 2022
- Full Text
- View/download PDF
30. Intra-operative OCT (iOCT) Super Resolution: a Two-Stage Methodology Leveraging High Quality Pre-operative OCT Scans.
- Author
-
Komninos C, Pissas T, Flores B, Bloch E, Vercauteren T, Ourselin S, Da Cruz L, and Bergeles C
- Abstract
Regenerative therapies have recently shown potential in restoring sight lost to degenerative diseases. Their efficacy requires precise intra-retinal delivery, which can be achieved by robotic systems accompanied by high-quality visualization of retinal layers. Intra-operative Optical Coherence Tomography (iOCT) captures cross-sectional retinal images in real time but with image quality that is inadequate for intra-retinal therapy delivery. This paper proposes a two-stage super-resolution methodology that enhances the image quality of low-resolution (LR) iOCT images by leveraging information from pre-operatively acquired high-resolution (HR) OCT (preOCT) images. First, we learn the degradation process from the HR to the LR domain through a CycleGAN and use it to generate pseudo-iOCT (LR) images from the HR preOCT ones. Then, we train a Pix2Pix model on pairs of pseudo-iOCT and preOCT images to learn the super-resolution mapping. Quantitative analysis using both full-reference and no-reference image quality metrics demonstrates that our approach clearly outperforms learning-based state-of-the-art techniques with statistical significance. Achieving iOCT image quality comparable to preOCT quality can help this medical imaging modality become established in vitreoretinal surgery, without requiring expensive hardware-related system updates.
- Published
- 2022
- Full Text
- View/download PDF
31. Gripe-Needle : A Sticky Suction Cup Gripper Equipped Needle for Targeted Therapeutics Delivery.
- Author
-
Joymungul K, Mitros Z, da Cruz L, Bergeles C, and Sadati SMH
- Abstract
This paper presents a multi-purpose gripping and incision tool-set to reduce the number of required manipulators for targeted therapeutics delivery in Minimally Invasive Surgery. We have recently proposed the use of multi-arm Concentric Tube Robots (CTR) consisting of an incision, a camera, and a gripper manipulator for deep orbital interventions, with a focus on Optic Nerve Sheath Fenestration (ONSF). The proposed prototype in this research, called Gripe-Needle, is a needle equipped with a sticky suction cup gripper capable of performing both gripping of target tissue and incision tasks in the optic nerve area by exploiting the multi-tube arrangement of a CTR for actuation of the different tool-set units. As a result, there is no need for an independent gripper arm for the incision task. The CTR's innermost tube is equipped with a needle, providing the pathway for drug delivery, and the immediately outer tube is attached to the suction cup, providing the suction pathway. Based on experiments on various materials, we observed that adding a sticky surface with bio-inspired grooves to a normal suction cup gripper has many advantages, such as: (1) enhanced adhesion through material stickiness and by air-tightening the contact surface; (2) maintained adhesion despite internal pressure variations, e.g. due to needle motion; and (3) sliding resistance. Simple finite element and theoretical modelling frameworks are proposed, based on which a miniature tool-set is designed to achieve the required gripping forces during ONSF.
The final designs were successfully tested for accessing the optic nerve of a realistic eye phantom in a skull eye orbit, robust gripping and incision on units of a plastic bubble wrap sample, and manipulating different tissue types of porcine eye samples., Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2021 Joymungul, Mitros, da Cruz, Bergeles and Sadati.)
- Published
- 2021
- Full Text
- View/download PDF
32. Homography-based Visual Servoing with Remote Center of Motion for Semi-autonomous Robotic Endoscope Manipulation.
- Author
-
Huber M, Mitchell JB, Henry R, Ourselin S, Vercauteren T, and Bergeles C
- Abstract
The dominant visual servoing approaches in Minimally Invasive Surgery (MIS) follow single points or adapt the endoscope's field of view based on the surgical tools' distance. These methods rely on point positions with respect to the camera frame to infer a control policy. Deviating from the dominant methods, we formulate a robotic controller that allows for image-based visual servoing requiring neither explicit tool and camera positions nor any explicit image depth information. The proposed method relies on homography-based image registration, which changes the automation paradigm from a point-centric towards a surgical-scene-centric approach. It simultaneously respects a programmable Remote Center of Motion (RCM). Our approach allows a surgeon to build a graph of desired views from which, once built, views can be manually selected and automatically servoed to, irrespective of changes in the robot-patient frame transformation. We evaluate our method on an abdominal phantom and provide an open-source ROS MoveIt integration for use with any serial manipulator. A video is provided.
- Published
- 2021
- Full Text
- View/download PDF
33. Deep learning for gradability classification of handheld, non-mydriatic retinal images.
- Author
-
Nderitu P, do Rio JMN, Rasheed R, Raman R, Rajalakshmi R, Bergeles C, and Sivaprasad S
- Subjects
- Cross-Sectional Studies, Deep Learning, Female, Humans, India, Male, Mass Screening methods, Middle Aged, Mydriatics administration & dosage, Photography methods, ROC Curve, Sensitivity and Specificity, Diabetic Retinopathy diagnosis, Diabetic Retinopathy diagnostic imaging, Retina diagnostic imaging
- Abstract
Screening effectively identifies patients at risk of sight-threatening diabetic retinopathy (STDR) when retinal images are captured through dilated pupils. Pharmacological mydriasis is not logistically feasible in non-clinical, community DR screening, where acquiring gradable retinal images using handheld devices exhibits high technical failure rates, reducing STDR detection. Deep learning (DL) based gradability predictions at acquisition could prompt device operators to recapture insufficient-quality images, increasing the proportion of gradable images and consequently STDR detection. Non-mydriatic retinal images were captured as part of SMART India, a cross-sectional, multi-site, community-based, house-to-house DR screening study between August 2018 and December 2019, using the Zeiss Visuscout 100 handheld camera. From 18,277 patient eyes (40,126 images), 16,170 patient eyes (35,319 images) were eligible, and 3261 retinal images (1490 patient eyes) were sampled and then labelled by two ophthalmologists. The compact DL model's area under the receiver operating characteristic curve was 0.93 (0.01) following five-fold cross-validation. Compact DL model agreement (Kappa) was 0.58, 0.69 and 0.69 for the high-specificity, balanced sensitivity/specificity and high-sensitivity operating points, compared to an inter-grader agreement of 0.59. Compact DL gradability model performance was favourable compared to ophthalmologists. Compact DL models can effectively classify non-mydriatic, handheld retinal image gradability, with potential applications within community-based DR screening.
- Published
- 2021
- Full Text
- View/download PDF
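The agreement (Kappa) figures reported above are Cohen's kappa, i.e. observed agreement between two raters corrected for chance agreement. A minimal sketch of the statistic (the function name is illustrative):

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa between two label sequences:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected by chance from the marginals."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.union1d(a, b)
    p_o = np.mean(a == b)
    p_e = sum(np.mean(a == l) * np.mean(b == l) for l in labels)
    return (p_o - p_e) / (1 - p_e)
```

Note that kappa is undefined when chance agreement is 1 (both raters always emit the same single label); real implementations guard against that case.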
34. Design and Modelling of a Continuum Robot for Distal Lung Sampling in Mechanically Ventilated Patients in Critical Care.
- Author
-
Mitros Z, Thamo B, Bergeles C, da Cruz L, Dhaliwal K, and Khadem M
- Abstract
In this paper, we design and develop a novel robotic bronchoscope for sampling of the distal lung in mechanically ventilated (MV) patients in critical care units. Despite the high cost and the attributable morbidity and mortality of MV patients with pneumonia, which approaches 40%, sampling of the distal lung in MV patients suffering from a range of lung diseases, such as Covid-19, is not standardised, lacks reproducibility and requires expert operators. We propose a robotic bronchoscope that enables repeatable sampling and guidance to distal lung pathologies by overcoming significant challenges encountered whilst performing bronchoscopy in MV patients, namely limited dexterity, the large size of the bronchoscope obstructing ventilation, and poor anatomical registration. We have developed a robotic bronchoscope with 7 Degrees of Freedom (DoFs), an outer diameter of 4.5 mm and an inner working channel of 2 mm. The prototype is a push/pull-actuated continuum robot capable of dexterous manipulation inside the lung and visualisation/sampling of the distal airways. A prototype of the robot is engineered and a mechanics-based model of the robotic bronchoscope is developed. Furthermore, we develop a novel numerical solver that improves the computational efficiency of the model and facilitates the deployment of the robot. Experiments are performed to verify the design and evaluate the accuracy and computational cost of the model. Results demonstrate that the model can predict the shape of the robot in <0.011 s with a mean error of 1.76 cm, enabling the future deployment of a robotic bronchoscope in MV patients., Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2021 Mitros, Thamo, Bergeles, da Cruz, Dhaliwal and Khadem.)
- Published
- 2021
- Full Text
- View/download PDF
35. Deep Learning-Based Segmentation and Quantification of Retinal Capillary Non-Perfusion on Ultra-Wide-Field Retinal Fluorescein Angiography.
- Author
-
Nunez do Rio JM, Sen P, Rasheed R, Bagchi A, Nicholson L, Dubis AM, Bergeles C, and Sivaprasad S
- Abstract
Reliable outcome measures are required for clinical trials investigating novel agents for preventing progression of capillary non-perfusion (CNP) in retinal vascular diseases. Currently, quantification of the topographical distribution of CNP on ultra-wide-field fluorescein angiography (UWF-FA) by retinal experts is subjective and lacks standardisation. A U-Net-style network was trained to extract a dense segmentation of CNP from a newly created dataset of 75 UWF-FA images. A subset of 20 images was also segmented by a second expert grader for inter-grader reliability evaluation. Further, a circular grid centred on the FAZ was used to provide standardised analysis of CNP distribution. The dense segmentation model was five-fold cross-validated, achieving an area under the receiver operating characteristic curve of 0.82 (0.03) and an area under the precision-recall curve of 0.73 (0.05). Inter-grader assessment on the 20-image subset achieved: precision 59.34 (10.92), recall 76.99 (12.5), and dice similarity coefficient (DSC) 65.51 (4.91); the centred operating point of the automated model reached: precision 64.41 (13.66), recall 70.02 (16.2), and DSC 66.09 (13.32). Agreement of automated CNP grid assessment reached: Kappa 0.55 (0.03), perfused intraclass correlation (ICC) 0.89 (0.77, 0.93), non-perfused ICC 0.86 (0.73, 0.92); inter-grader agreement of CNP grid assessment was: Kappa 0.43 (0.03), perfused ICC 0.70 (0.48, 0.83), non-perfused ICC 0.71 (0.48, 0.83). Automated dense segmentation of CNP in UWF-FA images achieves performance levels comparable to inter-grader agreement values. A grid placed on the deep learning-based automatic segmentation of CNP provides a reliable and quantifiable measurement of CNP, overcoming the subjectivity of human graders.
- Published
- 2020
- Full Text
- View/download PDF
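The precision, recall, and dice similarity coefficient (DSC) values reported above are standard overlap metrics for binary segmentation masks; a minimal sketch of their computation (function names are illustrative):

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

def precision_recall(pred, target, eps=1e-8):
    """Precision = TP / predicted positives; recall = TP / actual positives."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    return tp / (pred.sum() + eps), tp / (target.sum() + eps)
```

The `eps` terms simply avoid division by zero on empty masks; published results usually report these metrics per image and then aggregate.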
36. Optic Nerve Sheath Fenestration With a Multi-Arm Continuum Robot.
- Author
-
Mitros Z, Sadati S, Seneci C, Bloch E, Leibrandt K, Khadem M, da Cruz L, and Bergeles C
- Abstract
This article presents a medical robotic system for deep orbital interventions, with a focus on Optic Nerve Sheath Fenestration (ONSF). ONSF is a currently invasive ophthalmic surgical approach that can reduce potentially blinding elevated hydrostatic intracranial pressure on the optic disc via an incision on the optic nerve. The prototype is a multi-arm system capable of dexterous manipulation and visualization of the optic nerve area, allowing for a minimally invasive approach. Each arm is an independently controlled concentric tube robot collimated by a bespoke guide that is secured on the eye sclera via sutures. In this article, we consider the robot's end-effector design in order to reach/navigate the optic nerve according to the clinical requirements of ONSF. A prototype of the robot was engineered, and its ability to penetrate the optic nerve was analysed by conducting ex vivo experiments on porcine optic nerves and comparing their stiffness to human ones. The robot was successfully deployed in a custom-made realistic eye phantom. Our simulation studies and experimental results demonstrate that the robot can successfully navigate to the operation site and carry out the intervention.
- Published
- 2020
- Full Text
- View/download PDF
37. Learned optical flow for intra-operative tracking of the retinal fundus.
- Author
-
Ravasio CS, Pissas T, Bloch E, Flores B, Jalali S, Stoyanov D, Cardoso JM, Da Cruz L, and Bergeles C
- Subjects
- Algorithms, Humans, Deep Learning, Neural Networks, Computer, Retina surgery
- Abstract
Purpose: Sustained delivery of regenerative retinal therapies by robotic systems requires intra-operative tracking of the retinal fundus. We propose a supervised deep convolutional neural network to densely predict semantic segmentation and optical flow of the retina as mutually supportive tasks, implicitly inpainting retinal flow information missing due to occlusion by surgical tools., Methods: As manual annotation of optical flow is infeasible, we propose a flexible algorithm for generation of large synthetic training datasets on the basis of given intra-operative retinal images. We evaluate optical flow estimation by tracking a grid and sparsely annotated ground truth points on a benchmark of challenging real intra-operative clips obtained from an extensive internally acquired dataset encompassing representative vitreoretinal surgical cases., Results: The U-Net-based network trained on the synthetic dataset is shown to generalise well to the benchmark of real surgical videos. When used to track retinal points of interest, our flow estimation outperforms variational baseline methods on clips containing tool motions which occlude the points of interest, as is routinely observed in intra-operatively recorded surgery videos., Conclusions: The results indicate that complex synthetic training datasets can be used to specifically guide optical flow estimation. Our proposed algorithm therefore lays the foundation for a robust system which can assist with intra-operative tracking of moving surgical targets even when occluded.
- Published
- 2020
- Full Text
- View/download PDF
38. Deep iterative vessel segmentation in OCT angiography.
- Author
-
Pissas T, Bloch E, Cardoso MJ, Flores B, Georgiadis O, Jalali S, Ravasio C, Stoyanov D, Da Cruz L, and Bergeles C
- Abstract
This paper addresses retinal vessel segmentation on optical coherence tomography angiography (OCT-A) images of the human retina. Our approach is motivated by the need for high precision image-guided delivery of regenerative therapies in vitreo-retinal surgery. OCT-A visualizes macular vasculature, the main landmark of the surgically targeted area, at a level of detail and spatial extent unattainable by other imaging modalities. Thus, automatic extraction of detailed vessel maps can ultimately inform surgical planning. We address the task of delineation of the Superficial Vascular Plexus in 2D Maximum Intensity Projections (MIP) of OCT-A using convolutional neural networks that iteratively refine the quality of the produced vessel segmentations. We demonstrate that the proposed approach compares favourably to alternative network baselines and graph-based methodologies through extensive experimental analysis, using data collected from 50 subjects, including both individuals that underwent surgery for structural macular abnormalities and healthy subjects. Additionally, we demonstrate generalization to 3D segmentation and narrower field-of-view OCT-A. In the future, the extracted vessel maps will be leveraged for surgical planning and semi-automated intraoperative navigation in vitreo-retinal surgery., Competing Interests: The authors declare no conflicts of interest., (Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI.)
- Published
- 2020
- Full Text
- View/download PDF
39. Bio-compatible Piezoresistive Pressure Sensing Skin Sleeve for Millimetre-Scale Flexible Robots: Design, Manufacturing and Pitfalls.
- Author
-
Wasylczyk P, Ozimek F, Tiwari MK, da Cruz L, and Bergeles C
- Subjects
- Electronics, Equipment Design, Humans, Robotics, Sensation, Skin
- Abstract
Safe interactions between humans and robots require the robotic arms and/or tools to recognize and react to the surrounding environment via pressure sensing. With small-scale surgical interventions in mind, we have developed a flexible skin with tens of pressure sensing elements, designed to cover a 5 mm diameter tool. The prototype uses only biocompatible materials: soft silicones, carbon powder and metal wires. The material performance, sensing element design, manufacturing technology, and readout electronics are described. Our prototype demonstrates the feasibility of using this technology in various intervention scenarios, from endoscopic navigation to tissue manipulation. We conclude by identifying research directions that maximise the potential of the proposed technology.
- Published
- 2019
- Full Text
- View/download PDF
40. Fast adaptive optics scanning light ophthalmoscope retinal montaging.
- Author
-
Davidson B, Kalitzeos A, Carroll J, Dubra A, Ourselin S, Michaelides M, and Bergeles C
- Abstract
The field of view of high-resolution ophthalmoscopes that require adaptive optics (AO) wavefront correction is limited by the isoplanatic patch of the eye, which varies across individual eyes and with the portion of the pupil used for illumination and/or imaging. Therefore, all current AO ophthalmoscopes have small fields of view, comparable to or smaller than the isoplanatic patch, and the resulting images have to be stitched off-line to create larger montages. These montages are currently assembled either manually, by expert human graders, or automatically, often requiring several hours per montage. This arguably limits the applicability of AO ophthalmoscopy to studies with small cohorts and, moreover, prevents reviewing a montage of all captured locations in real time during image acquisition to further direct targeted imaging. In this work, we propose stitching the images with a novel algorithm that uses Oriented FAST and Rotated BRIEF (ORB) descriptors and locality-sensitive hashing, and that searches for a 'good enough' transformation rather than the best possible one, to achieve processing times of 1-2 minutes per montage of 250 images. Moreover, the proposed method produces montages that are as accurate as previous methods when considered under the image similarity metrics normalised mutual information (NMI) and normalised cross-correlation (NCC)., Competing Interests: The authors declare that there are no conflicts of interest related to this article.
- Published
- 2018
- Full Text
- View/download PDF
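Of the two similarity metrics used above to validate montage accuracy, normalised cross-correlation (NCC) admits a particularly compact definition; a minimal sketch (the function name is illustrative, and equally sized, already-registered images are assumed):

```python
import numpy as np

def ncc(a, b, eps=1e-12):
    """Zero-mean normalised cross-correlation between two equally sized
    images; 1.0 means identical up to an affine intensity change."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```

Because the means are removed and the vectors are normalised, NCC is invariant to global brightness and contrast changes, which is why it is a common choice for comparing overlapping montage tiles.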
41. Automatic Cone Photoreceptor Localisation in Healthy and Stargardt Afflicted Retinas Using Deep Learning.
- Author
-
Davidson B, Kalitzeos A, Carroll J, Dubra A, Ourselin S, Michaelides M, and Bergeles C
- Subjects
- Case-Control Studies, Humans, Macular Degeneration metabolism, Macular Degeneration pathology, Neural Networks, Computer, Retina cytology, Retinal Cone Photoreceptor Cells cytology, Stargardt Disease, Visual Acuity, Algorithms, Deep Learning, Macular Degeneration congenital, Retina metabolism, Retinal Cone Photoreceptor Cells metabolism
- Abstract
We present a robust deep learning framework for the automatic localisation of cone photoreceptor cells in Adaptive Optics Scanning Light Ophthalmoscope (AOSLO) split-detection images. Monitoring cone photoreceptors with AOSLO imaging grants an excellent view into retinal structure and health, provides new perspectives into well known pathologies, and allows clinicians to monitor the effectiveness of experimental treatments. The MultiDimensional Recurrent Neural Network (MDRNN) approach developed in this paper is the first method capable of reliably and automatically identifying cones in both healthy retinas and retinas afflicted with Stargardt disease. Therefore, it represents a leap forward in the computational image processing of AOSLO images, and can provide clinical support in on-going longitudinal studies of disease progression and therapy. We validate our method using images from healthy subjects and subjects with the inherited retinal pathology Stargardt disease, which significantly alters image quality and cone density. We conduct a thorough comparison of our method with current state-of-the-art methods, and demonstrate that the proposed approach is both more accurate and appreciably faster in localizing cones. As further validation to the method's robustness, we demonstrate it can be successfully applied to images of retinas with pathologies not present in the training data: achromatopsia, and retinitis pigmentosa.
- Published
- 2018
- Full Text
- View/download PDF
42. Unsupervised identification of cone photoreceptors in non-confocal adaptive optics scanning light ophthalmoscope images.
- Author
-
Bergeles C, Dubis AM, Davidson B, Kasilian M, Kalitzeos A, Carroll J, Dubra A, Michaelides M, and Ourselin S
- Abstract
Precise measurements of photoreceptor numerosity and spatial arrangement are promising biomarkers for the early detection of retinal pathologies and may be valuable in the evaluation of retinal therapies. Adaptive optics scanning light ophthalmoscopy (AOSLO) is a method of imaging that corrects for aberrations of the eye to acquire high-resolution images that reveal the photoreceptor mosaic. These images are typically graded manually by experienced observers, precluding robust, large-scale use of the technology. This paper addresses unsupervised automated detection of cones in non-confocal, split-detection AOSLO images. Our algorithm leverages the appearance of split-detection images to create a cone model that is used for classification. Results show that it compares favorably to the state-of-the-art, both for images of healthy retinas and for images from patients affected by Stargardt disease. The algorithm presented also compares well to manual annotation while excelling in speed.
- Published
- 2017
- Full Text
- View/download PDF
43. A Continuum Robot and Control Interface for Surgical Assist in Fetoscopic Interventions.
- Author
-
Dwyer G, Chadebecq F, Tella Amo M, Bergeles C, Maneas E, Pawar V, Vander Poorten E, Deprest J, Ourselin S, De Coppi P, Vercauteren T, and Stoyanov D
- Abstract
Twin-twin transfusion syndrome requires interventional treatment using a fetoscopically introduced laser to sever the shared blood supply between the fetuses. This is a delicate procedure relying on small instrumentation with limited articulation to guide the laser tip, and on a narrow field of view to visualize all relevant vascular connections. In this letter, we report on a mechatronic design for a comanipulated instrument that combines concentric tube actuation with a larger manipulator constrained by a remote centre of motion. A stereoscopic camera is mounted at the distal tip and used for imaging. Our mechanism provides enhanced dexterity and stability of the imaging device. We demonstrate that the imaging system can be used for computing geometry and enhancing the view at the operating site. Results using electromagnetic sensors for verification and comparison to visual odometry from the distal sensor show that our system is promising and can be developed further for multiple clinical needs in fetoscopic procedures.
- Published
- 2017
- Full Text
- View/download PDF
44. Adaptive Nonparametric Kinematic Modeling of Concentric Tube Robots.
- Author
-
Fagogenis G, Bergeles C, and Dupont PE
- Abstract
Concentric tube robots comprise telescopic pre-curved elastic tubes. The robot's tip position and shape are controlled via relative tube motions, i.e. tube rotations and translations. Non-linear interactions between the tubes, e.g. friction and torsion, as well as uncertainty in the physical properties of the tubes themselves, e.g. Young's modulus, curvature, or stiffness, hinder accurate kinematic modelling. In this paper, we present a machine-learning-based methodology for kinematic modelling of concentric tube robots and in situ model adaptation. Our approach is based on Locally Weighted Projection Regression (LWPR). The model comprises an ensemble of linear models, each of which locally approximates the original complex kinematic relation. LWPR can accommodate model deviations by adjusting the respective local models at run-time, resulting in an adaptive kinematics framework. We evaluated our approach on data gathered from a three-tube robot, and report high accuracy across the robot's configuration space.
- Published
- 2016
- Full Text
- View/download PDF
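The idea summarised above — an ensemble of local linear models blended by receptive-field weights and adapted online — can be illustrated with a toy implementation. This is a simplified sketch under strong assumptions (fixed Gaussian centres and widths, plain stochastic gradient updates), not the LWPR algorithm itself or the paper's model; the class and method names are hypothetical.

```python
import numpy as np

class LocalLinearEnsemble:
    """Toy locally weighted model in the spirit of LWPR: each centre
    carries a local affine map, predictions blend the local outputs with
    normalised Gaussian weights, and maps adapt online from (x, y) pairs."""

    def __init__(self, centres, dim_out, width=1.0, lr=0.1):
        self.centres = np.asarray(centres, float)   # (K, dim_in)
        k, d = self.centres.shape
        self.W = np.zeros((k, dim_out, d + 1))      # K local affine maps
        self.width = width
        self.lr = lr

    def _weights(self, x):
        # Gaussian receptive-field activations, normalised to sum to 1.
        d2 = ((self.centres - x) ** 2).sum(axis=1)
        w = np.exp(-0.5 * d2 / self.width ** 2)
        return w / (w.sum() + 1e-12)

    def predict(self, x):
        xa = np.append(np.asarray(x, float), 1.0)   # affine input [x; 1]
        w = self._weights(np.asarray(x, float))
        local = self.W @ xa                         # (K, dim_out) local outputs
        return w @ local                            # responsibility-weighted blend

    def update(self, x, y):
        """One online gradient step on the blended squared error; each
        local map moves in proportion to its responsibility."""
        xa = np.append(np.asarray(x, float), 1.0)
        w = self._weights(np.asarray(x, float))
        err = np.asarray(y, float) - self.predict(x)
        self.W += self.lr * w[:, None, None] * np.einsum('o,d->od', err, xa)
```

Real LWPR additionally projects inputs onto locally relevant directions (partial least squares), adapts receptive-field widths, and adds/prunes local models as data arrives.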
45. Concentric Tube Robot Design and Optimization Based on Task and Anatomical Constraints.
- Author
-
Bergeles C, Gosline AH, Vasilyev NV, Codd PJ, Del Nido PJ, and Dupont PE
- Abstract
Concentric tube robots are catheter-sized continuum robots that are well suited for minimally invasive surgery inside confined body cavities. These robots are constructed from sets of pre-curved superelastic tubes and are capable of assuming complex 3D curves. The family of 3D curves that the robot can assume depends on the number, curvatures, lengths and stiffnesses of the tubes in its tube set. The robot design problem involves solving for a tube set that will produce the family of curves necessary to perform a surgical procedure. At a minimum, these curves must enable the robot to smoothly extend into the body and to manipulate tools over the desired surgical workspace while respecting anatomical constraints. This paper introduces an optimization framework that utilizes procedure- or patient-specific image-based anatomical models along with surgical workspace requirements to generate robot tube set designs. The algorithm searches for designs that minimize robot length and curvature and for which all paths required for the procedure consist of stable robot configurations. Two mechanics-based kinematic models are used. Initial designs are sought using a model assuming torsional rigidity. These designs are then refined using a torsionally compliant model. The approach is illustrated with clinically relevant examples from neurosurgery and intracardiac surgery.
- Published
- 2015
- Full Text
- View/download PDF
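The design problem summarised in the abstract above — searching over tube curvatures so the robot reaches the required workspace while staying gently curved — can be illustrated with a toy planar sketch of the torsionally rigid model, in which overlapping tubes bend to their stiffness-weighted mean curvature. All numbers, function names, and the 2D simplification are assumptions for illustration only, not the paper's framework.

```python
import numpy as np
from itertools import product

def arc_tip(kappa, L, pose=(0.0, 0.0, 0.0)):
    """Propagate a planar pose (x, y, heading) along a constant-curvature arc."""
    x, y, th = pose
    if abs(kappa) < 1e-9:
        return (x + L * np.cos(th), y + L * np.sin(th), th)
    dx = np.sin(kappa * L) / kappa          # arc displacement in the local frame
    dy = (1.0 - np.cos(kappa * L)) / kappa
    c, s = np.cos(th), np.sin(th)
    return (x + c * dx - s * dy, y + s * dx + c * dy, th + kappa * L)

def resultant_curvature(kappas, stiffs):
    """Torsionally rigid model: stiffness-weighted mean curvature of overlapping tubes."""
    return sum(k * s for k, s in zip(kappas, stiffs)) / sum(stiffs)

def tip_position(k_inner, k_outer, L_overlap, L_inner, stiff=(1.0, 2.0)):
    """Two sections: overlapped tubes, then the inner tube extended alone (units: mm)."""
    k_eq = resultant_curvature((k_inner, k_outer), stiff)
    pose = arc_tip(k_eq, L_overlap)
    pose = arc_tip(k_inner, L_inner, pose)
    return np.array(pose[:2])

def search_design(target, tol=1.0):
    """Brute-force the curvature pair reaching `target`, preferring low curvature."""
    best = None
    for k_in, k_out in product(np.linspace(0.0, 0.08, 41), repeat=2):
        tip = tip_position(k_in, k_out, 40.0, 30.0)
        if np.linalg.norm(tip - target) < tol:
            cost = max(k_in, k_out)         # minimise the most strongly curved tube
            if best is None or cost < best[0]:
                best = (cost, k_in, k_out)
    return best
```

The paper's framework replaces this grid search with a proper optimization over full anatomical path constraints and refines the result with a torsionally compliant model; the sketch only conveys the structure of the search: a forward model inside a curvature-and-length-penalising loop.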
46. Biometry-based concentric tubes robot for vitreoretinal surgery.
- Author
-
Lin FY, Bergeles C, and Yang GZ
- Subjects
- Humans, Sutures, Vitreoretinal Surgery adverse effects, Biometry, Robotics instrumentation, Vitreoretinal Surgery instrumentation
- Abstract
Vitreoretinal surgery requires dexterous manoeuvres of tiny surgical tools in the confined cavity of the human eye through incisions made on the sclera. The fulcrum effect stemming from these incisions limits the safely reachable intraocular workspace and may result in scleral stress and collision with the intraocular lens. This paper proposes a concentric tube robot for panretinal interventions without risking scleral or lens damage. The robot is designed based on biometric measurements of the human eye, the required workspace, and the ease of incorporation in the clinical workflow. By comprising sub-millimetre concentric tubes, our system is suited to 23 G vitreoretinal surgery, which does not require post-operative suturing. The proposed design is modular and features a rapid tube-exchange mechanism. To grasp and manipulate tissue, a sub-millimetre flexible gripper is fabricated. Experiments demonstrate the ability to reach peripheral retinal regions with limited motion at the incision point and no risk of lens contact.
- Published
- 2015
- Full Text
- View/download PDF
47. From passive tool holders to microsurgeons: safer, smaller, smarter surgical robots.
- Author
-
Bergeles C and Yang GZ
- Subjects
- Humans, Robotics instrumentation, Microsurgery instrumentation, Minimally Invasive Surgical Procedures instrumentation, Robotic Surgical Procedures instrumentation
- Abstract
Within only a few decades from its initial introduction, the field of surgical robotics has evolved into a dynamic and rapidly growing research area with increasing clinical uptake worldwide. Initially introduced for stereotaxic neurosurgery, surgical robots are now involved in an increasing number of procedures, demonstrating their practical clinical potential while propelling further advances in surgical innovations. Emerging platforms are also able to perform complex interventions through only a single-entry incision, and navigate through natural anatomical pathways in a tethered or wireless fashion. New devices facilitate superhuman dexterity and enable the performance of surgical steps that are otherwise impossible. They also allow seamless integration of microimaging techniques at the cellular level, significantly expanding the capabilities of surgeons. This paper provides an overview of the significant achievements in surgical robotics and identifies the current trends and future research directions of the field in making surgical robots safer, smaller, and smarter.
- Published
- 2014
- Full Text
- View/download PDF
48. Multi-view stereo and advanced navigation for transanal endoscopic microsurgery.
- Author
-
Bergeles C, Pratt P, Merrifield R, Darzi A, and Yang GZ
- Subjects
- Humans, Image Enhancement methods, Image Interpretation, Computer-Assisted methods, Reproducibility of Results, Sensitivity and Specificity, Algorithms, Colonoscopy methods, Imaging, Three-Dimensional methods, Microsurgery methods, Pattern Recognition, Automated methods, Rectal Neoplasms pathology, Rectal Neoplasms surgery, Surgery, Computer-Assisted methods
- Abstract
Transanal endoscopic microsurgery (TEM), i.e., the local excision of rectal carcinomas by way of a bimanual operating system with magnified binocular vision, is gaining acceptance in lieu of more radical total interventions. A major issue with this approach is the lack of information on submucosal anatomical structures. This paper presents an advanced navigation system, wherein the intraoperative 3D structure is stably estimated from multiple stereoscopic views. It is registered to a preoperatively acquired anatomical volume based on subject-specific priors. The endoscope motion is tracked based on the 3D scene and its field-of-view is visualised jointly with the preoperative information. Based on in vivo data, this paper demonstrates how the proposed navigation system provides intraoperative navigation for TEM.
- Published
- 2014
- Full Text
- View/download PDF
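At the core of multi-view reconstruction systems like the one described above is triangulation of scene points from calibrated views. As a minimal illustration, here is standard linear (DLT) triangulation of a single point from two projection matrices; this is a textbook building block, not the paper's full navigation pipeline.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: find the 3D point whose projections through
    3x4 camera matrices P1, P2 best match pixel observations x1, x2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],   # each observation contributes two
        x1[1] * P1[2] - P1[1],   # homogeneous linear constraints
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # null vector of A = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]
```

In the navigation setting, points triangulated this way from multiple stereoscopic views form the intraoperative 3D scene that is then registered to the preoperative volume and used to track the endoscope.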
49. Practical intraoperative stereo camera calibration.
- Author
-
Pratt P, Bergeles C, Darzi A, and Yang GZ
- Subjects
- Calibration, Equipment Design, Equipment Failure Analysis, Image Enhancement instrumentation, Image Enhancement methods, Reproducibility of Results, Sensitivity and Specificity, United Kingdom, Algorithms, Endoscopes, Image Interpretation, Computer-Assisted instrumentation, Image Interpretation, Computer-Assisted methods, Imaging, Three-Dimensional instrumentation, Imaging, Three-Dimensional methods
- Abstract
Many of the currently available stereo endoscopes employed during minimally invasive surgical procedures have shallow depths of field. Consequently, focus settings are adjusted from time to time in order to achieve the best view of the operative workspace. Since refocusing invalidates any prior calibration procedure, this presents a significant problem for image guidance applications, which typically rely on the calibrated camera parameters for a variety of geometric tasks, including triangulation, registration and scene reconstruction. While recalibration can be performed intraoperatively, this invariably results in a major disruption to workflow, and can be seen to represent a genuine barrier to the widespread adoption of image guidance technologies. The novel solution described herein constructs a model of the stereo endoscope across the continuum of focus settings, thereby reducing the number of degrees of freedom to one, such that a single view of reference geometry will determine the calibration uniquely. No special hardware or access to proprietary interfaces is required, and the method is ready for evaluation during human cases. A thorough quantitative analysis indicates that the resulting intrinsic and extrinsic parameters lead to calibrations as accurate as those derived from multiple pattern views.
- Published
- 2014
- Full Text
- View/download PDF
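The key idea above — modelling the endoscope across the continuum of focus settings so that a single reference view fixes the one remaining degree of freedom — can be caricatured as fitting the intrinsics as a smooth function of a single focus parameter. The polynomial form, its degree, and the fixed principal point below are assumptions of this sketch, not the paper's model.

```python
import numpy as np

def fit_focus_model(focus_settings, focal_lengths, deg=2):
    """Fit a polynomial mapping from the scalar focus setting to focal length,
    using a handful of offline calibrations at known focus settings."""
    return np.polyfit(focus_settings, focal_lengths, deg)

def intrinsics_at(focus, coeffs, cx, cy):
    """Predict the 3x3 intrinsic matrix at an arbitrary focus setting.
    Principal point (cx, cy) is held fixed in this simplified sketch."""
    f = np.polyval(coeffs, focus)
    return np.array([[f, 0.0, cx],
                     [0.0, f, cy],
                     [0.0, 0.0, 1.0]])
```

Intraoperatively, one view of known reference geometry then suffices to pin down the scalar focus parameter, after which the full calibration follows from the fitted model without a multi-view recalibration session.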
50. Mobility experiments with microrobots for minimally invasive intraocular surgery.
- Author
-
Ullrich F, Bergeles C, Pokki J, Ergeneman O, Erni S, Chatzipirpiridis G, Pané S, Framme C, and Nelson BJ
- Subjects
- Animals, Device Removal instrumentation, Device Removal methods, Equipment Design, Eye Diseases surgery, Female, Humans, Intraocular Pressure, Intravitreal Injections, Magnetics instrumentation, Magnets, Microsurgery instrumentation, Minimally Invasive Surgical Procedures instrumentation, Models, Animal, Ophthalmologic Surgical Procedures instrumentation, Rabbits, Robotics instrumentation, Swine, Vitreous Body surgery, Wireless Technology instrumentation, Magnetics methods, Microsurgery methods, Minimally Invasive Surgical Procedures methods, Ophthalmologic Surgical Procedures methods, Robotics methods
- Abstract
Purpose: To investigate microrobots as an assistive tool for minimally invasive intraocular surgery and to demonstrate mobility and controllability inside the living rabbit eye., Methods: A system for wireless magnetic control of untethered microrobots was developed. Mobility and controllability of a microrobot were examined in different media, specifically vitreous, balanced salt solution (BSS), and silicone oil, through ex vivo and in vivo animal experiments., Results: The developed electromagnetic system enables precise control of magnetic microrobots over a workspace that covers the posterior eye segment. The system allows for rotation and translation of the microrobot in different media (vitreous, BSS, silicone oil) inside the eye., Conclusions: Intravitreal introduction of untethered mobile microrobots can enable sutureless and precise ophthalmic procedures. Ex vivo and in vivo experiments demonstrate that microrobots can be manipulated inside the eye. Potential applications are targeted drug delivery for maculopathies such as AMD, intravenous deployment of anticoagulation agents for retinal vein occlusion (RVO), and mechanical applications such as epiretinal membrane (ERM) peeling. The technology has the potential to reduce the invasiveness of ophthalmic surgery and assist in the treatment of a variety of ophthalmic diseases.
- Published
- 2013
- Full Text
- View/download PDF