521 results for "Siewerdsen, Jeffrey H."
Search Results
2. CT in musculoskeletal imaging: still helpful and for what?
- Author
- Carrino, John A., Ibad, Hamza, Lin, Yenpo, Ghotbi, Elena, Klein, Joshua, Demehri, Shadpour, Del Grande, Filippo, Bogner, Eric, Boesen, Mikael P., and Siewerdsen, Jeffrey H.
- Published
- 2024
- Full Text
- View/download PDF
3. Vessel-targeted compensation of deformable motion in interventional cone-beam CT
- Author
- Lu, Alexander, Huang, Heyuan, Hu, Yicheng, Zbijewski, Wojciech, Unberath, Mathias, Siewerdsen, Jeffrey H., Weiss, Clifford R., and Sisniega, Alejandro
- Published
- 2024
- Full Text
- View/download PDF
4. Deformable registration of preoperative MR and intraoperative long-length tomosynthesis images for guidance of spine surgery via image synthesis
- Author
- Huang, Yixuan, Zhang, Xiaoxuan, Hu, Yicheng, Johnston, Ashley R., Jones, Craig K., Zbijewski, Wojciech B., Siewerdsen, Jeffrey H., Helm, Patrick A., Witham, Timothy F., and Uneri, Ali
- Published
- 2024
- Full Text
- View/download PDF
5. Synthetic PET from CT improves diagnosis and prognosis for lung cancer: Proof of concept
- Author
- Salehjahromi, Morteza, Karpinets, Tatiana V., Sujit, Sheeba J., Qayati, Mohamed, Chen, Pingjun, Aminu, Muhammad, Saad, Maliazurina B., Bandyopadhyay, Rukhmini, Hong, Lingzhi, Sheshadri, Ajay, Lin, Julie, Antonoff, Mara B., Sepesi, Boris, Ostrin, Edwin J., Toumazis, Iakovos, Huang, Peng, Cheng, Chao, Cascone, Tina, Vokes, Natalie I., Behrens, Carmen, Siewerdsen, Jeffrey H., Hazle, John D., Chang, Joe Y., Zhang, Jianhua, Lu, Yang, Godoy, Myrna C.B., Chung, Caroline, Jaffray, David, Wistuba, Ignacio, Lee, J. Jack, Vaporciyan, Ara A., Gibbons, Don L., Gladish, Gregory, Heymach, John V., Wu, Carol C., Zhang, Jianjun, and Wu, Jia
- Published
- 2024
- Full Text
- View/download PDF
6. Operational Ontology for Oncology (O3): A Professional Society-Based, Multistakeholder, Consensus-Driven Informatics Standard Supporting Clinical and Research Use of Real-World Data From Patients Treated for Cancer
- Author
- Mayo, Charles S., Feng, Mary U., Brock, Kristy K., Kudner, Randi, Balter, Peter, Buchsbaum, Jeffrey C., Caissie, Amanda, Covington, Elizabeth, Daugherty, Emily C., Dekker, Andre L., Fuller, Clifton D., Hallstrom, Anneka L., Hong, David S., Hong, Julian C., Kamran, Sophia C., Katsoulakis, Eva, Kildea, John, Krauze, Andra V., Kruse, Jon J., McNutt, Tod, Mierzwa, Michelle, Moreno, Amy, Palta, Jatinder R., Popple, Richard, Purdie, Thomas G., Richardson, Susan, Sharp, Gregory C., Satomi, Shiraishi, Tarbox, Lawrence R., Venkatesan, Aradhana M., Witztum, Alon, Woods, Kelly E., Yao, Yuan, Farahani, Keyvan, Aneja, Sanjay, Gabriel, Peter E., Hadjiiski, Lubomire, Ruan, Dan, Siewerdsen, Jeffrey H., Bratt, Steven, Casagni, Michelle, Chen, Su, Christodouleas, John C., DiDonato, Anthony, Hayman, James, Kapoor, Rishhab, Kravitz, Saul, Sebastian, Sharon, Von Siebenthal, Martin, Bosch, Walter, Hurkmans, Coen, Yom, Sue S., and Xiao, Ying
- Published
- 2023
- Full Text
- View/download PDF
7. Quantification of manipulation forces needed for robot-assisted reduction of the ankle syndesmosis: an initial cadaveric study
- Author
- Gebremeskel, Mikias, Shafiq, Babar, Uneri, Ali, Sheth, Niral, Simmerer, Corey, Zbijewski, Wojciech, Siewerdsen, Jeffrey H., Cleary, Kevin, and Li, Gang
- Published
- 2022
- Full Text
- View/download PDF
8. Detection of fibular rotational changes in cone beam CT: experimental study in a specimen model
- Author
- Beisemann, Nils, Tilk, Antonella M., Gierse, Jula, Grützner, Paul A., Franke, Jochen, Siewerdsen, Jeffrey H., and Vetter, Sven Y.
- Published
- 2022
- Full Text
- View/download PDF
9. Computed Tomography: State-of-the-Art Advancements in Musculoskeletal Imaging
- Author
- Ibad, Hamza Ahmed, de Cesar Netto, Cesar, Shakoor, Delaram, Sisniega, Alejandro, Liu, Stephen Z., Siewerdsen, Jeffrey H., Carrino, John A., Zbijewski, Wojciech, and Demehri, Shadpour
- Published
- 2022
- Full Text
- View/download PDF
10. Automatic vessel attenuation measurement for quality control of contrast‐enhanced CT: Validation on the portal vein.
- Author
- McCoy, Kevin, Marisetty, Sujay, Tan, Dominique, Jensen, Corey T., Siewerdsen, Jeffrey H., Peterson, Christine B., and Ahmad, Moiz
- Subjects
- IMAGE intensifiers, COMPUTED tomography, RANDOM forest algorithms, QUALITY control, BLOOD vessels
- Abstract
Background: Adequate image enhancement of organs and blood vessels of interest is an important aspect of image quality in contrast‐enhanced computed tomography (CT). There is a need for an objective method for evaluation of vessel contrast that can be automatically and systematically applied to large sets of CT exams. Purpose: The purpose of this work was to develop a method to automatically segment and measure attenuation in Hounsfield units (HU) in the portal vein (PV) in contrast‐enhanced abdomen CT examinations. Methods: Input CT images were processed by a vessel‐enhancing filter to determine candidate PV segmentations. Multiple machine learning (ML) classifiers were evaluated for classifying a segmentation as corresponding to the PV based on segmentation shape, location, and intensity features. A public data set of 82 contrast‐enhanced abdomen CT examinations was used to train the method. An optimal ML classifier was selected by training and tuning on 66 of the 82 exams (80% training split) in the public data set. The method was evaluated in terms of segmentation classification accuracy and PV attenuation measurement accuracy, compared to manually determined ground truth, on a test set of the remaining 16 exams (20% test split) held out from the public data set. The method was further evaluated on a separate, independently collected test set of 21 examinations. Results: The best classifier was found to be a random forest, with a precision of 0.892 for correctly identifying the PV from among the input candidate segmentations in the held‐out test set. The mean absolute error of the measured PV attenuation relative to ground‐truth manual measurement was 13.4 HU. On the independent test set, the overall precision decreased to 0.684. However, the PV attenuation measurement remained relatively accurate, with a mean absolute error of 15.2 HU.
Conclusions: The method was shown to accurately measure PV attenuation over a large range of attenuation values and was validated in an independently collected dataset. The method did not require time‐consuming manual contouring to supervise training. The method may be applied to systematic quality control of contrast‐enhanced CT examinations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
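The two evaluation metrics reported in the abstract above (precision of PV-candidate classification and mean absolute attenuation error in HU) can be sketched in a few lines. This is a minimal illustration; the function names and sample numbers are assumptions, not the authors' code:

```python
# Hedged sketch of the evaluation metrics named in the abstract: precision
# of portal-vein candidate classification and mean absolute error (HU) of
# automated vs. manual attenuation measurements.

def precision(predicted, truth):
    """Fraction of positive predictions that are true positives."""
    tp = sum(1 for p, t in zip(predicted, truth) if p and t)
    fp = sum(1 for p, t in zip(predicted, truth) if p and not t)
    return tp / (tp + fp) if (tp + fp) else 0.0

def mean_abs_error_hu(measured, manual):
    """Mean absolute difference between automated and manual HU readings."""
    return sum(abs(m - g) for m, g in zip(measured, manual)) / len(measured)

# Illustrative values only (not from the paper):
print(precision([1, 1, 0, 1, 0], [1, 0, 0, 1, 0]))   # 2 TP, 1 FP -> 0.666...
print(mean_abs_error_hu([150, 160], [140, 170]))      # -> 10.0
```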
11. Deformable motion compensation in interventional cone‐beam CT with a context‐aware learned autofocus metric.
- Author
- Huang, Heyuan, Liu, Yixuan, Siewerdsen, Jeffrey H., Lu, Alexander, Hu, Yicheng, Zbijewski, Wojciech, Unberath, Mathias, Weiss, Clifford R., and Sisniega, Alejandro
- Subjects
- CONE beam computed tomography, CONVOLUTIONAL neural networks, CROSS correlation, SPATIAL resolution, MOTION capture (Human mechanics), DEFORMATION of surfaces
- Abstract
Purpose: Interventional cone‐beam CT (CBCT) offers 3D visualization of soft‐tissue and vascular anatomy, enabling 3D guidance of abdominal interventions. However, its long acquisition time makes CBCT susceptible to patient motion. Image‐based autofocus offers a suitable platform for compensation of deformable motion in CBCT, but it relies on handcrafted motion metrics based on first‐order image properties that lack awareness of the underlying anatomy. This work proposes a data‐driven approach to motion quantification via a learned, context‐aware, deformable metric, VIF_DL, that quantifies the amount of motion degradation as well as the realism of the structural anatomical content in the image. Methods: The proposed VIF_DL was modeled as a deep convolutional neural network (CNN) trained to recreate a reference‐based structural similarity metric: visual information fidelity (VIF). The deep CNN acted on motion‐corrupted images, providing an estimation of the spatial VIF map that would be obtained against a motion‐free reference, capturing motion distortion and anatomic plausibility. The deep CNN featured a multi‐branch architecture with a high‐resolution branch for estimation of voxel‐wise VIF on a small volume of interest. A second contextual, low‐resolution branch provided features associated with anatomical context for disentanglement of motion effects and anatomical appearance. The deep CNN was trained on paired motion‐free and motion‐corrupted data obtained with a high‐fidelity forward projection model for a protocol involving 120 kV and 9.90 mGy. The performance of VIF_DL was evaluated via metrics of correlation with ground‐truth VIF and with the underlying deformable motion field in simulated data with deformable motion fields with amplitude ranging from 5 to 20 mm and frequency from 2.4 up to 4 cycles/scan.
Robustness to variation in tissue contrast and noise levels was assessed in simulation studies with varying beam energy (90–120 kV) and dose (1.19–39.59 mGy). Further validation was obtained in experimental studies with a deformable phantom. Final validation was obtained via integration of VIF_DL into an autofocus compensation framework, applied to motion compensation on experimental datasets and evaluated via metrics of spatial resolution on soft‐tissue boundaries and sharpness of contrast‐enhanced vascularity. Results: The magnitude and spatial map of VIF_DL showed consistent and high correlation with the ground truth in both simulated and real data, yielding average normalized cross‐correlation (NCC) values of 0.95 and 0.88, respectively. Similarly, VIF_DL achieved good correlation with the underlying motion field, with an average NCC of 0.90. In experimental phantom studies, VIF_DL properly reflected changes in motion amplitude and frequency: voxel‐wise averaging of the local VIF_DL across the full reconstructed volume yielded an average value of 0.69 for the case with mild motion (2 mm, 12 cycles/scan) and 0.29 for the case with severe motion (12 mm, 6 cycles/scan). Autofocus motion compensation using VIF_DL resulted in noticeable mitigation of motion artifacts and improved spatial resolution of soft tissue and high‐contrast structures, with reductions in edge spread function width of 8.78% and 9.20%, respectively. Motion compensation also increased the conspicuity of contrast‐enhanced vascularity, reflected in a 9.64% increase in vessel sharpness.
Conclusion: The proposed VIF_DL, featuring a novel context‐aware architecture, demonstrated its capacity as a reference‐free surrogate of structural similarity to quantify motion‐induced degradation of image quality and anatomical plausibility of image content. The validation studies showed robust performance across motion patterns, x‐ray techniques, and anatomical instances. The proposed anatomy‐ and context‐aware metric offers a powerful alternative to conventional motion estimation metrics and a step forward for application of deep autofocus motion compensation for guidance in clinical interventional procedures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
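The abstract above reports agreement between the learned VIF_DL map and ground truth as normalized cross-correlation (NCC). A minimal sketch of NCC on flattened value lists follows; a real implementation would operate on 3D volumes (e.g., with numpy), and all inputs here are illustrative:

```python
import math

# Normalized cross-correlation (NCC): correlation of two signals after
# removing their means and normalizing by their standard deviations.
# Returns a value in [-1, 1]; 1 means perfect linear agreement.

def ncc(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den
```

Because NCC is invariant to affine rescaling, a metric map that tracks the ground-truth map up to gain and offset still scores 1.0.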
12. Objective image quality assurance in cone‐beam CT: Test methods, analysis, and workflow in longitudinal studies.
- Author
- Johnston, Ashley, Mahesh, Mahadevappa, Uneri, Ali, Rypinski, Tatiana A., Boone, John M., and Siewerdsen, Jeffrey H.
- Subjects
- CONE beam computed tomography, QUALITY control charts, QUALITY assurance, TEST methods, LONGITUDINAL method, INDUSTRIAL engineering, TRANSFER functions
- Abstract
Background: Standards for image quality evaluation in multi‐detector CT (MDCT) and cone‐beam CT (CBCT) are evolving to keep pace with technological advances. A clear need is emerging for methods that facilitate rigorous quality assurance (QA) with up‐to‐date metrology and streamlined workflow suitable to a range of MDCT and CBCT systems. Purpose: To evaluate the feasibility and workflow associated with image quality (IQ) assessment in longitudinal studies for MDCT and CBCT with a single test phantom and semiautomated analysis of objective, quantitative IQ metrology. Methods: A test phantom (Corgi™ Phantom, The Phantom Lab, Greenwich, New York, USA) was used in monthly IQ testing over the course of 1 year for three MDCT scanners (one of which offered helical and volumetric scan modes) and four CBCT scanners. Semiautomated software analyzed image uniformity, linearity, contrast, noise, contrast‐to‐noise ratio (CNR), 3D noise‐power spectrum (NPS), modulation transfer function (MTF) in axial and oblique directions, and cone‐beam artifact magnitude. The workflow was evaluated using methods adapted from systems/industrial engineering, including value stream process modeling (VSPM), standard work layout (SWL), and standard work control charts (SWCT) to quantify and optimize test methodology in routine practice. The completeness and consistency of DICOM data from each system were also evaluated. Results: Quantitative IQ metrology provided valuable insight in longitudinal quality assurance (QA), with metrics such as NPS and MTF providing insight on root cause for various forms of system failure, for example, detector calibration and geometric calibration. Monthly constancy testing showed variations in IQ test metrics owing to system performance as well as phantom setup and provided initial estimates of upper and lower control limits appropriate to QA action levels.
Rigorous evaluation of QA workflow identified methods to reduce total cycle time to ∼10 min for each system, viz., use of a single phantom configuration appropriate to all scanners and Head or Body scan protocols. Numerous gaps in the completeness and consistency of DICOM data were observed for CBCT systems. Conclusion: An IQ phantom and test methodology were found to be suitable for QA of MDCT and CBCT systems, with streamlined workflow appropriate to busy clinical settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
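The "upper and lower control limits" mentioned in the abstract above are the standard control-chart construction: limits placed at the mean ± k standard deviations of repeated constancy-test values. A minimal sketch, with made-up monthly CNR values for illustration:

```python
import statistics

# Shewhart-style control limits from a series of monthly constancy-test
# measurements. Points falling outside [lcl, ucl] would trigger QA action.
# n_sigma = 3 is the conventional choice; the paper's action levels may differ.

def control_limits(samples, n_sigma=3.0):
    mu = statistics.mean(samples)
    sd = statistics.stdev(samples)   # sample standard deviation
    return mu - n_sigma * sd, mu + n_sigma * sd

# Hypothetical monthly CNR readings for one scanner:
monthly_cnr = [41.8, 42.3, 41.5, 42.0, 41.9, 42.2]
lcl, ucl = control_limits(monthly_cnr)
print(f"control limits: ({lcl:.2f}, {ucl:.2f})")
```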
13. Motion compensation in extremity cone-beam computed tomography
- Author
- Sisniega, Alejandro, Thawait, Gaurav K., Shakoor, Delaram, Siewerdsen, Jeffrey H., Demehri, Shadpour, and Zbijewski, Wojciech
- Published
- 2019
- Full Text
- View/download PDF
14. Feasibility of bone marrow edema detection using dual‐energy cone‐beam computed tomography.
- Author
- Liu, Stephen Z., Herbst, Magdalena, Schaefer, Jamin, Weber, Thomas, Vogt, Sebastian, Ritschl, Ludwig, Kappler, Steffen, Kawcak, Christopher E., Stewart, Holly L., Siewerdsen, Jeffrey H., and Zbijewski, Wojciech
- Subjects
- CONE beam computed tomography, BONE marrow, WRIST, MOLECULAR beams, BONE health, RECEIVER operating characteristic curves, EDEMA
- Abstract
Background: Dual‐energy (DE) detection of bone marrow edema (BME) would be a valuable new diagnostic capability for the emerging orthopedic cone‐beam computed tomography (CBCT) systems. However, this imaging task is inherently challenging because of the narrow energy separation between water (edematous fluid) and fat (healthy yellow marrow), requiring precise artifact correction and dedicated material decomposition approaches. Purpose: We investigate the feasibility of BME assessment using kV‐switching DE CBCT with a comprehensive CBCT artifact correction framework and a two‐stage projection‐ and image‐domain three‐material decomposition algorithm. Methods: DE CBCT projections of quantitative BME phantoms (water containers 100–165 mm in size with inserts presenting various degrees of edema) and an animal cadaver model of BME were acquired on a CBCT test bench emulating the standard wrist imaging configuration of a Multitom Rax twin robotic x‐ray system. The slow kV‐switching scan protocol involved a 60 kV low‐energy (LE) beam and a 120 kV high‐energy (HE) beam switched every 0.5° over a 200° angular span. The DE CBCT data preprocessing and artifact correction framework consisted of (i) projection interpolation onto matched LE and HE projection views, (ii) lag and glare deconvolutions, and (iii) efficient Monte Carlo (MC)‐based scatter correction. Virtual non‐calcium (VNCa) images for BME detection were then generated by projection‐domain decomposition into an aluminium (Al) and polyethylene basis set (to remove beam hardening) followed by three‐material image‐domain decomposition into water, Ca, and fat. Feasibility of BME detection was quantified in terms of VNCa image contrast and receiver operating characteristic (ROC) curves. Robustness to object size, position in the field of view (FOV), and beam collimation (varied 20–160 mm) was investigated.
Results: The MC‐based scatter correction delivered > 69% reduction of cupping artifacts for moderate to wide collimations (> 80 mm beam width), which was essential to achieve accurate DE material decomposition. In a forearm‐sized object, a 20% increase in water concentration (edema) of a trabecular bone‐mimicking mixture presented as ∼15 HU VNCa contrast using 80–160 mm beam collimations. The variability with respect to object position in the FOV was modest (< 15% coefficient of variation). The areas under the ROC curve were > 0.9. A femur‐sized object presented a somewhat more challenging task, resulting in increased sensitivity to object positioning at 160 mm collimation. In animal cadaver specimens, areas of VNCa enhancement consistent with BME were observed in DE CBCT images in regions of MRI‐confirmed edema. Conclusion: Our results indicate that the proposed artifact correction and material decomposition pipeline can overcome the challenges of scatter and limited spectral separation to achieve relatively accurate and sensitive BME detection in DE CBCT. This study provides an important baseline for clinical translation of musculoskeletal DE CBCT to quantitative, point‐of‐care bone health assessment. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
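The projection-domain step described in the abstract above, decomposition into an aluminium/polyethylene basis, amounts to solving a small linear system per ray: with known basis attenuation coefficients at the LE and HE beams, the two measured line integrals determine the two basis thicknesses. The sketch below is an assumption-laden simplification (monoenergetic, 2×2 per-ray solve); the coefficient values are placeholders, not calibrated data:

```python
# Per-ray two-material decomposition sketch. Given LE/HE line integrals
# p_le, p_he and basis coefficients mu_al = (mu at LE, mu at HE) for Al and
# mu_pe likewise for polyethylene, solve:
#   p_le = a*t_al + c*t_pe
#   p_he = b*t_al + d*t_pe
# for the basis thicknesses t_al, t_pe via Cramer's rule.

def decompose_al_pe(p_le, p_he, mu_al, mu_pe):
    a, b = mu_al   # Al coefficient at LE, HE
    c, d = mu_pe   # PE coefficient at LE, HE
    det = a * d - c * b
    t_al = (p_le * d - c * p_he) / det
    t_pe = (a * p_he - b * p_le) / det
    return t_al, t_pe
```

The decomposition is only well-conditioned when the basis materials respond differently to the two beams (det far from zero), which is why the narrow spectral separation noted in the abstract makes this task hard.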
15. Clinical Translation of the LevelCheck Decision Support Algorithm for Target Localization in Spine Surgery
- Author
- Manbachi, Amir, De Silva, Tharindu, Uneri, Ali, Jacobson, Matthew, Goerres, Joseph, Ketcha, Michael, Han, Runze, Aygun, Nafi, Thompson, David, Ye, Xiaobu, Vogt, Sebastian, Kleinszig, Gerhard, Molina, Camilo, Iyer, Rajiv, Garzon-Muvdi, Tomas, Raber, Michael R., Groves, Mari, Wolinsky, Jean-Paul, and Siewerdsen, Jeffrey H.
- Published
- 2018
- Full Text
- View/download PDF
16. Cone‐beam computed tomography produces images of numerically comparable diagnostic quality for bone and inferior quality for soft tissues compared with fan‐beam computed tomography in cadaveric equine metacarpophalangeal joints.
- Author
- Stewart, Holly L., Siewerdsen, Jeffrey H., Selberg, Kurt T., Bills, Kathryn W., and Kawcak, Christopher E.
- Abstract
Cone‐beam computed tomography (CBCT) is an emerging modality for imaging of the equine patient. The objective of this prospective, descriptive, exploratory study was to assess visualization tasks using CBCT compared with conventional fan‐beam CT (FBCT) for imaging of the metacarpophalangeal joint in equine cadavers. Satisfaction scores were numerically excellent with both CBCT and FBCT for bone evaluation, and FBCT was numerically superior for soft tissue evaluation. Preference tests indicated FBCT was numerically superior for soft tissue evaluation, while preference test scoring for bone was observer‐dependent. Findings from this study can be used as background for future studies evaluating CBCT image quality in live horses. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
17. A Self‐Configuring Deep Learning Network for Segmentation of Temporal Bone Anatomy in Cone‐Beam CT Imaging.
- Author
- Ding, Andy S., Lu, Alexander, Li, Zhaoshuo, Sahu, Manish, Galaiya, Deepa, Siewerdsen, Jeffrey H., Unberath, Mathias, Taylor, Russell H., and Creighton, Francis X.
- Abstract
Objective: Preoperative planning for otologic or neurotologic procedures often requires manual segmentation of relevant structures, which can be tedious and time‐consuming. Automated methods for segmenting multiple geometrically complex structures can not only streamline preoperative planning but also augment minimally invasive and/or robot‐assisted procedures in this space. This study evaluates a state‐of‐the‐art deep learning pipeline for semantic segmentation of temporal bone anatomy. Study Design: A descriptive study of a segmentation network. Setting: Academic institution. Methods: A total of 15 high‐resolution cone‐beam temporal bone computed tomography (CT) data sets were included in this study. All images were co‐registered, with relevant anatomical structures (eg, ossicles, inner ear, facial nerve, chorda tympani, bony labyrinth) manually segmented. Predicted segmentations from no new U‐Net (nnU‐Net), an open‐source 3‐dimensional semantic segmentation neural network, were compared against ground‐truth segmentations using modified Hausdorff distances (mHD) and Dice scores. Results: Fivefold cross‐validation results for nnU‐Net predictions against ground‐truth labels were as follows: malleus (mHD: 0.044 ± 0.024 mm, Dice: 0.914 ± 0.035), incus (mHD: 0.051 ± 0.027 mm, Dice: 0.916 ± 0.034), stapes (mHD: 0.147 ± 0.113 mm, Dice: 0.560 ± 0.106), bony labyrinth (mHD: 0.038 ± 0.031 mm, Dice: 0.952 ± 0.017), and facial nerve (mHD: 0.139 ± 0.072 mm, Dice: 0.862 ± 0.039). Comparison against atlas‐based segmentation propagation showed significantly higher Dice scores for all structures (p < .05). Conclusion: Using an open‐source deep learning pipeline, we demonstrate consistently submillimeter accuracy for semantic CT segmentation of temporal bone anatomy compared to hand‐segmented labels.
This pipeline has the potential to greatly improve preoperative planning workflows for a variety of otologic and neurotologic procedures and augment existing image guidance and robot‐assisted systems for the temporal bone. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
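The two segmentation metrics used in the abstract above, Dice overlap and modified Hausdorff distance (mHD), can be sketched on voxel-coordinate sets. This is a toy illustration (sets of coordinate tuples rather than label volumes), not the paper's implementation:

```python
import math

def dice(a, b):
    """Dice overlap of two voxel-coordinate sets: 2|A∩B| / (|A|+|B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def modified_hausdorff(a, b):
    """Max of the two directed average closest-point distances."""
    def directed(p, q):
        return sum(min(math.dist(x, y) for y in q) for x in p) / len(p)
    return max(directed(a, b), directed(b, a))
```

Dice rewards volumetric overlap, while mHD penalizes boundary deviations, which is why small, thin structures like the stapes can score a low Dice (0.560 above) even when the distance error stays submillimeter.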
18. Flexible Adult Acquired Flatfoot Deformity: Comparison Between Weight-Bearing and Non-Weight-Bearing Measurements Using Cone-Beam Computed Tomography
- Author
- de Cesar Netto, Cesar, Schon, Lew C., Thawait, Gaurav K., da Fonseca, Lucas Furtado, Chinanuvathana, Apisan, Zbijewski, Wojciech B., Siewerdsen, Jeffrey H., and Demehri, Shadpour
- Published
- 2017
- Full Text
- View/download PDF
19. Image quality and dose for a multisource cone‐beam CT extremity scanner
- Author
- Gang, Grace J., Zbijewski, Wojciech, Mahesh, Mahadevappa, Thawait, Gaurav, Packard, Nathan, Yorkston, John, Demehri, Shadpour, and Siewerdsen, Jeffrey H.
- Published
- 2018
- Full Text
- View/download PDF
20. Modeling and evaluation of a high‐resolution CMOS detector for cone‐beam CT of the extremities
- Author
- Cao, Qian, Sisniega, Alejandro, Brehler, Michael, Stayman, J. Webster, Yorkston, John, Siewerdsen, Jeffrey H., and Zbijewski, Wojciech
- Published
- 2018
- Full Text
- View/download PDF
21. Customized External Cranioplasty for Management of Syndrome of Trephined in Nonsurgical Candidates.
- Author
- Ghinda, Cristina D., Stewart, Ryan, Totis, Francesca, Siewerdsen, Jeffrey H., and Anderson, William S.
- Published
- 2023
- Full Text
- View/download PDF
22. Automatic Artery/Vein Classification in 2D-DSA Images of Stroke Patients
- Author
- Van Asperen, Vivian, Van Den Berg, Josefien, Lycklama, Fleur, Marting, Victoria, Cornelissen, Sandra, Van Zwam, Wim H., Hofmeijer, Jeanette, Van Der Lugt, Aad, Van Walsum, Theo, Van Der Sluijs, Matthijs, Su, Ruisheng, Linte, Cristian A., Siewerdsen, Jeffrey H., Radiology & Nuclear Medicine, TechMed Centre, and Clinical Neurophysiology
- Subjects
- Stroke, 22/3 OA procedure, Vessels, Artery-vein classification, DSA
- Abstract
To develop an objective system for perfusion assessment in digital subtraction angiography (DSA), artery-vein (A/V) classification is essential. In this study, an automated A/V classification system for 2D DSA images of stroke patients is proposed. After preprocessing through vessel segmentation with a Frangi filter and Gaussian smoothing, a time-intensity curve (TIC) of each vessel pixel was extracted and relevant parameters were calculated. Different combinations of input parameters were systematically tested to determine the optimal set of input parameters. The parameters formed the input for k-means (KM) and fuzzy c-means (FCM) clustering. Both algorithms were tested for clustering into 2 to 7 clusters. Cluster labeling was performed based on the average time to peak (TTP) of a cluster. A reference standard consisted of manually annotated DSA images from the MR CLEAN registry. Outcome measures were accuracy, true artery rate (TAR), and true vein rate (TVR). The optimal value for k was found to be 2 for both KM and FCM clustering. The optimal parameter set was: variance, standard deviation, maximal slope, peak width, time to peak, arrival time, maximal intensity, and area under the TIC. No significant difference was found between FCM and KM clustering: both yielded an average accuracy of 76%, an average TAR of 74%, and an average TVR of 80%.
- Published
- 2022
- Full Text
- View/download PDF
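The clustering-and-labeling step described in the abstract above can be sketched in a toy form: k-means with k = 2 on a per-pixel parameter, then labeling the cluster with the earlier average time to peak (TTP) as "artery". The real system clusters on eight TIC parameters; this 1-D, TTP-only sketch is an assumption-laden simplification:

```python
# Toy 1-D k-means (k = 2) followed by TTP-based artery/vein labeling.
# Values are seconds to peak enhancement for individual vessel pixels.

def kmeans_1d(values, k=2, iters=50):
    centers = [min(values), max(values)]          # simple deterministic init
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def artery_vein_labels(ttp_values):
    centers = kmeans_1d(ttp_values)
    artery_center, vein_center = min(centers), max(centers)
    # Arteries enhance first, so the low-TTP cluster is labeled "artery".
    return ['artery' if abs(t - artery_center) <= abs(t - vein_center)
            else 'vein' for t in ttp_values]

print(artery_vein_labels([2.0, 2.5, 3.0, 8.0, 8.5, 9.0]))
```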
23. Towards Real-time 6D Pose Estimation of Objects in Single-view Cone-beam X-ray
- Author
- Viviers, Christiaan G.A., de Bruijn, Joël, de With, Peter H.N., van der Sommen, Fons, Filatova, Lena, Linte, Cristian A., Siewerdsen, Jeffrey H., Video Coding & Architectures, EAISI Health, Center for Care & Cure Technology Eindhoven, and Eindhoven MedTech Innovation Center
- Subjects
- FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, deep learning, 6D pose estimation, object detection, X-ray projection model, Machine Learning (cs.LG)
- Abstract
Deep learning-based pose estimation algorithms can successfully estimate the pose of objects in an image, especially in the field of color images. Deep learning models for 6D object pose estimation in X-ray images often use custom architectures that employ extensive CAD models and simulated data for training purposes. Recent RGB-based methods opt to solve pose estimation problems using small datasets, making them more attractive for the X-ray domain, where medical data is scarcely available. We refine an existing RGB-based model (SingleShotPose) to estimate the 6D pose of a marked cube from grayscale X-ray images by creating a generic solution trained on only real X-ray data and adjusted for X-ray acquisition geometry. The model regresses 2D control points and calculates the pose through 2D/3D correspondences using Perspective-n-Point (PnP), allowing a single trained model to be used across all supporting cone-beam-based X-ray geometries. Since modern X-ray systems continuously adjust acquisition parameters during a procedure, it is essential for such a pose estimation network to consider these parameters in order to be deployed successfully and find a real use case. With a 5-cm/5-degree accuracy of 93% and an average 3D rotation error of 2.2 degrees, the results of the proposed approach are comparable with state-of-the-art alternatives, while requiring significantly fewer real training examples and being applicable in real-time applications. Published at SPIE Medical Imaging 2022.
- Published
- 2022
24. Tissue segmentation for workflow recognition in open inguinal hernia repair training
- Author
- Klosa, Elizabeth, Hisey, Rebecca, Nazari, Tahmina, Wiggers, Theo, Zevin, Boris, Ungi, Tamas, Fichtinger, Gabor, Linte, Cristian A., Siewerdsen, Jeffrey H., and Surgery
- Abstract
PURPOSE: As medical education adopts a competency-based training method, experts are spending substantial amounts of time instructing and assessing trainees' competence. In this study, we look to develop a computer-assisted training platform that can provide instruction and assessment of open inguinal hernia repairs without needing an expert observer. We recognize workflow tasks based on the tool-tissue interactions, suggesting that we first need a method to identify tissues. This study aims to train a neural network in identifying tissues in a low-cost phantom as we work towards identifying the tool-tissue interactions needed for task recognition. METHODS: Eight simulated tissues were segmented throughout five videos from experienced surgeons who performed open inguinal hernia repairs on phantoms. A U-Net was trained using leave-one-user-out cross validation. The average F-score, false positive rate and false negative rate were calculated for each tissue to evaluate the U-Net's performance. RESULTS: Higher F-scores and lower false negative and positive rates were recorded for the skin, hernia sac, spermatic cord, and nerves, while slightly lower metrics were recorded for the subcutaneous tissue, Scarpa's fascia, external oblique aponeurosis and superficial epigastric vessels. CONCLUSION: The U-Net performed better in recognizing tissues that were relatively larger in size and more prevalent, while struggling to recognize smaller tissues only briefly visible. Since workflow recognition does not require perfect segmentation, we believe our U-Net is sufficient in recognizing the tissues of an inguinal hernia repair phantom. Future studies will explore combining our segmentation U-Net with tool detection as we work towards workflow recognition.
- Published
- 2022
- Full Text
- View/download PDF
25. Image quality models for 2D and 3D x‐ray imaging systems: A perspective vignette.
- Author
- Siewerdsen, Jeffrey H.
- Subjects
- THREE-dimensional imaging, IMAGING systems, X-ray imaging, CONE beam computed tomography, SYSTEM analysis, IMAGE analysis, PHOTON counting
- Abstract
Image quality models based on cascaded systems analysis and task‐based imaging performance were an important aspect of the emergence of 2D and 3D digital x‐ray systems over the last 25 years. This perspective vignette offers cursory review of such developments and personal insights that may not be obvious within previously published scientific literature. The vignette traces such models to the mid‐1990s, when flat‐panel x‐ray detectors were emerging as a new base technology for digital radiography and benefited from the rigorous, objective characterization of imaging performance gained from such models. The connection of models for spatial resolution and noise to spatial‐frequency‐dependent descriptors of imaging task provided a useful framework for system optimization that helped to accelerate the development of new technologies to first clinical use. Extension of the models to new technologies and applications is also described, including dual‐energy imaging, photon‐counting detectors, phase contrast imaging, tomosynthesis, cone‐beam CT, 3D image reconstruction, and image registration. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. Combining physics‐based models with deep learning image synthesis and uncertainty in intraoperative cone‐beam CT of the brain.
- Author
- Zhang, Xiaoxuan, Sisniega, Alejandro, Zbijewski, Wojciech B., Lee, Junghoon, Jones, Craig K., Wu, Pengwei, Han, Runze, Uneri, Ali, Vagdargi, Prasad, Helm, Patrick A., Luciano, Mark, Anderson, William S., and Siewerdsen, Jeffrey H.
- Subjects
- IMAGE reconstruction algorithms, CONE beam computed tomography, DEEP learning, GENERATIVE adversarial networks, EPISTEMIC uncertainty, COMPUTED tomography
- Abstract
Background: Image‐guided neurosurgery requires high localization and registration accuracy to enable effective treatment and avoid complications. However, accurate neuronavigation based on preoperative magnetic resonance (MR) or computed tomography (CT) images is challenged by brain deformation occurring during the surgical intervention. Purpose: To facilitate intraoperative visualization of brain tissues and deformable registration with preoperative images, a 3D deep learning (DL) reconstruction framework (termed DL‐Recon) was proposed for improved intraoperative cone‐beam CT (CBCT) image quality. Methods: The DL‐Recon framework combines physics‐based models with deep learning CT synthesis and leverages uncertainty information to promote robustness to unseen features. A 3D generative adversarial network (GAN) with a conditional loss function modulated by aleatoric uncertainty was developed for CBCT‐to‐CT synthesis. Epistemic uncertainty of the synthesis model was estimated via Monte Carlo (MC) dropout. Using spatially varying weights derived from epistemic uncertainty, the DL‐Recon image combines the synthetic CT with an artifact‐corrected filtered back‐projection (FBP) reconstruction. In regions of high epistemic uncertainty, DL‐Recon includes greater contribution from the FBP image. Twenty paired real CT and simulated CBCT images of the head were used for network training and validation, and experiments evaluated the performance of DL‐Recon on CBCT images containing simulated and real brain lesions not present in the training data. Performance among learning‐ and physics‐based methods was quantified in terms of structural similarity (SSIM) of the resulting image to diagnostic CT and Dice similarity metric (DSC) in lesion segmentation compared to ground truth. A pilot study was conducted involving seven subjects with CBCT images acquired during neurosurgery to assess the feasibility of DL‐Recon in clinical data. 
Results: CBCT images reconstructed via FBP with physics‐based corrections exhibited the usual challenges to soft‐tissue contrast resolution due to image non‐uniformity, noise, and residual artifacts. GAN synthesis improved image uniformity and soft‐tissue visibility but was subject to error in the shape and contrast of simulated lesions that were unseen in training. Incorporation of aleatoric uncertainty in synthesis loss improved estimation of epistemic uncertainty, with variable brain structures and unseen lesions exhibiting higher epistemic uncertainty. The DL‐Recon approach mitigated synthesis errors while maintaining improvement in image quality, yielding 15%–22% increase in SSIM (image appearance compared to diagnostic CT) and up to 25% increase in DSC in lesion segmentation compared to FBP. Clear gains in visual image quality were also observed in real brain lesions and in clinical CBCT images. Conclusions: DL‐Recon leveraged uncertainty estimation to combine the strengths of DL and physics‐based reconstruction and demonstrated substantial improvements in the accuracy and quality of intraoperative CBCT. The improved soft‐tissue contrast resolution could facilitate visualization of brain structures and support deformable registration with preoperative images, further extending the utility of intraoperative CBCT in image‐guided neurosurgery. [ABSTRACT FROM AUTHOR]
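The uncertainty-weighted combination at the core of DL-Recon can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the weight mapping and the sensitivity parameter `k` are hypothetical, and only the general scheme (epistemic uncertainty from MC dropout steering the blend toward FBP) follows the abstract.

```python
import numpy as np

def mc_dropout_uncertainty(forward_passes):
    """Epistemic uncertainty as the per-voxel variance over T stochastic
    forward passes of the synthesis network (Monte Carlo dropout)."""
    stack = np.stack(forward_passes, axis=0)
    return stack.var(axis=0)

def dl_recon_combine(fbp, synth, epistemic_var, k=1.0):
    """Combine an artifact-corrected FBP volume with a synthesized CT volume
    using spatially varying weights derived from epistemic uncertainty.
    High-uncertainty voxels lean toward the physics-based FBP image.
    The mapping of variance to weight (and k) is an assumption."""
    w = epistemic_var / (epistemic_var + k)  # w -> 1 where synthesis is uncertain
    return w * fbp + (1.0 - w) * synth
```

With zero epistemic uncertainty the output reduces to the synthetic CT; as uncertainty grows, the FBP contribution dominates, matching the behavior described in the abstract.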
- Published
- 2023
- Full Text
- View/download PDF
27. Image based registration between full x-ray and spot mammograms for x-ray guided stereotactic breast biopsy
- Author
-
Said, Sarah, Clauser, Paola, Ruiter, Nicole, Baltzer, Pascal, Hopp, Torsten, Linte, Cristian A., and Siewerdsen, Jeffrey H.
- Subjects
ddc:620, Engineering & allied operations - Published
- 2022
- Full Text
- View/download PDF
28. Segmentation of the mouse skull for MRI guided transcranial focused ultrasound therapy planning
- Author
-
Hopp, Torsten, Springer, Luca, Gross, Carl, Grudzenski-Theis, Saskia, Mathis-Ullrich, Franziska, Ruiter, Nicole, Linte, Cristian A., and Siewerdsen, Jeffrey H.
- Subjects
DATA processing & computer science, ddc:004 - Abstract
For opening the blood-brain barrier using focused ultrasound (FUS) to treat neurodegenerative diseases, mouse-specific therapy planning is an essential step. For our therapy planning approach based on acoustic simulations, we propose to automatically segment the mouse skull and brain from magnetic resonance imaging, which is usually used in combination with FUS for monitoring purposes. The proposed method consists of (1) pre-processing to enhance image contrast and remove noise, (2) rough skull segmentation using morphological operations and adaptive binarization, (3) segmentation of the brain using the established 3D-PCNN method, (4) correction of the skull segmentation using anatomical information about the brain location, and (5) post-processing to remove obvious errors from the final skull segmentation. The method is evaluated on four in-vivo datasets obtained with different parameters. The median Matthews Correlation Coefficient (MCC) across all slices of the four datasets was 0.85 for the brain segmentation, 0.69 for the overall skull segmentation, and 0.78 for the skull cap. Finally, to showcase the application, an acoustic simulation based on the segmentation is presented; it predicts a pressure field comparable to our earlier micro-CT-based method and lines up well with literature estimates of ultrasound attenuation.
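The rough skull segmentation step (morphological operations plus adaptive binarization) can be sketched with standard SciPy tools. This is a hedged illustration of the general technique, not the paper's code: the window size, offset, and the assumption that cortical bone appears hypointense in MRI are all choices made here for the example.

```python
import numpy as np
from scipy import ndimage

def rough_skull_segmentation(vol, window=5, offset=0.1):
    """Sketch of pre-processing + adaptive binarization + morphological cleanup.
    Assumes the skull is darker than its neighborhood (hypointense in MRI)."""
    # (1) pre-processing: light Gaussian smoothing to suppress noise
    smooth = ndimage.gaussian_filter(vol.astype(float), sigma=1.0)
    # (2) adaptive binarization: compare each voxel to its local mean
    local_mean = ndimage.uniform_filter(smooth, size=window)
    mask = smooth < (local_mean - offset)
    # morphological closing to bridge small gaps in the skull shell
    mask = ndimage.binary_closing(mask, iterations=2)
    # keep the largest connected component as the rough skull
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))
```

In the full pipeline this rough mask would then be corrected using the brain segmentation, as the abstract describes.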
- Published
- 2022
- Full Text
- View/download PDF
29. Cone-beam CT with a flat-panel detector: From image science to image-guided surgery
- Author
-
Siewerdsen, Jeffrey H.
- Published
- 2011
- Full Text
- View/download PDF
30. Computed Tomography: State-of-the-Art Advancements in Musculoskeletal Imaging.
- Author
-
Ibad, Hamza Ahmed, de Cesar Netto, Cesar, Shakoor, Delaram, Sisniega, Alejandro, Liu, Stephen Z., Siewerdsen, Jeffrey H., Carrino, John A., Zbijewski, Wojciech, and Demehri, Shadpour
- Published
- 2023
- Full Text
- View/download PDF
31. Diagnostic Performance of a Prototype Dual-Energy Chest Imaging System: ROC Analysis
- Author
-
Kashani, Hany, Varon, Carlos A., Paul, Narinder S., Gang, Grace J., Van Metter, Rich, Yorkston, John, and Siewerdsen, Jeffrey H.
- Published
- 2010
- Full Text
- View/download PDF
32. Intraoperative image-guided transoral robotic surgery: pre-clinical studies
- Author
-
Liu, Wen P., Reaugamornrat, Sureerat, Sorger, Jonathan M., Siewerdsen, Jeffrey H., Taylor, Russell H., and Richmon, Jeremy D.
- Published
- 2015
- Full Text
- View/download PDF
33. Development of a High-performance Dual-energy Chest Imaging System: Initial Investigation of Diagnostic Performance
- Author
-
Kashani, Hany, Gang, Jianan Grace, Shkumat, Nicholas A., Varon, Carlos A., Yorkston, John, Van Metter, Richard, Paul, Narinder S., and Siewerdsen, Jeffrey H.
- Published
- 2009
- Full Text
- View/download PDF
34. Effect of subject motion and gantry rotation speed on image quality and dose delivery in CT‐guided radiotherapy.
- Author
-
Hrinivich, William T., Chernavsky, Nicole E., Morcos, Marc, Li, Taoran, Wu, Pengwei, Wong, John, and Siewerdsen, Jeffrey H.
- Subjects
DOSE-response relationship (Radiation), ROTATIONAL motion, IMAGE-guided radiation therapy, CONE beam computed tomography, COMPUTED tomography, RADIOTHERAPY, MOTION - Abstract
Purpose: To investigate the effects of subject motion and gantry rotation speed on computed tomography (CT) image quality over a range of image acquisition speeds for fan‐beam (FB) and cone‐beam (CB) CT scanners, and quantify the geometric and dosimetric errors introduced by FB and CB sampling in the context of adaptive radiotherapy. Methods: Images of motion phantoms were acquired using four CT scanners with gantry rotation speeds of 0.5 s/rotation (denoted FB‐0.5), 1.9 s/rotation (FB‐1.9), 16.6 s/rotation (CB‐16.6), and 60.0 s/rotation (CB‐60.0). A phantom presenting various tissue densities undergoing motion with 4‐s period and ranging in amplitude from ±0.5 to ±10.0 mm was used to characterize motion artifacts (streaks), motion blur (edge‐spread function, ESF), and geometric inaccuracy (excursion of insert centroids and distortion of known shape). An anthropomorphic abdomen phantom undergoing ±2.5‐mm motion with 4‐s period was used to simulate an adaptive radiotherapy workflow, and relative geometric and dosimetric errors were compared between scanners. Results: At ±2.5‐mm motion, phantom measurements demonstrated mean ± SD ESF widths of 0.6 ± 0.0, 1.3 ± 0.4, 2.0 ± 1.1, and 2.9 ± 2.0 mm and geometric inaccuracy (excursion) of 2.7 ± 0.4, 4.1 ± 1.2, 2.6 ± 0.7, and 2.0 ± 0.5 mm for the FB‐0.5, FB‐1.9, CB‐16.6, and CB‐60.0 scanners, respectively. The results demonstrated nonmonotonic trends with scanner speed for FB and CB geometries. Geometric and dosimetric errors in adaptive radiotherapy plans were largest for the slowest (CB‐60.0) scanner and similar for the three faster systems (CB‐16.6, FB‐1.9, and FB‐0.5). Conclusions: Clinically standard CB‐60.0 demonstrates strong image quality degradation in the presence of subject motion, which is mitigated through faster CBCT or FBCT. Although motion blur is minimized for FB‐0.5 and FB‐1.9, such systems suffer from increased geometric distortion compared to CB‐16.6. 
Each system reflects tradeoffs in image artifacts and geometric inaccuracies that affect treatment delivery/dosimetric error and should be considered in the design of next‐generation CT‐guided radiotherapy systems. [ABSTRACT FROM AUTHOR]
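The motion-blur metric used above (edge-spread function width) can be computed from a measured edge profile. A minimal sketch, assuming a monotone edge profile and using the common 10%–90% rise distance as the width (the paper's exact ESF-width definition is not stated in the abstract):

```python
import numpy as np

def esf_width(profile, spacing_mm, lo=0.1, hi=0.9):
    """Width of an edge-spread function as the 10%-90% rise distance of a
    normalized, monotonically increasing edge profile."""
    p = np.asarray(profile, dtype=float)
    p = (p - p.min()) / (p.max() - p.min())   # normalize to [0, 1]
    x = np.arange(p.size) * spacing_mm        # sample positions in mm
    # interpolate the positions where the profile crosses lo and hi
    x_lo = np.interp(lo, p, x)
    x_hi = np.interp(hi, p, x)
    return x_hi - x_lo
```

Wider rise distances correspond to stronger motion blur, which is how the ±0.5 to ±10.0 mm phantom motions would manifest in the measured profiles.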
- Published
- 2022
- Full Text
- View/download PDF
35. Automated Extraction of Anatomical Measurements From Temporal Bone CT Imaging.
- Author
-
Ding, Andy S., Lu, Alexander, Li, Zhaoshuo, Galaiya, Deepa, Ishii, Masaru, Siewerdsen, Jeffrey H., Taylor, Russell H., and Creighton, Francis X.
- Abstract
Objective: Proposed methods of minimally invasive and robot-assisted procedures within the temporal bone require measurements of surgically relevant distances and angles, which often require time-consuming manual segmentation of preoperative imaging. This study aims to describe an automatic segmentation and measurement extraction pipeline of temporal bone cone-beam computed tomography (CT) scans. Study Design: Descriptive study of temporal bone measurements. Setting: Academic institution. Methods: A propagation template composed of 16 temporal bone CT scans was formed with relevant anatomical structures and landmarks manually segmented. Next, 52 temporal bone CT scans were autonomously segmented using deformable registration techniques from the Advanced Normalization Tools Python package. Anatomical measurements were extracted via in-house Python algorithms. Extracted measurements were compared to ground truth values from manual segmentations. Results: Paired t test analyses showed no statistical difference between measurements using this pipeline and ground truth measurements from manually segmented images. Mean (SD) malleus manubrium length was 4.39 (0.34) mm. Mean (SD) incus short and long processes were 2.91 (0.18) mm and 3.53 (0.38) mm, respectively. The mean (SD) maximal diameter of the incus long process was 0.74 (0.17) mm. The first and second genu of the facial nerve had mean (SD) angles of 68.6 (6.7) degrees and 111.1 (5.3) degrees, respectively. The facial recess had a mean (SD) span of 3.21 (0.46) mm. Mean (SD) minimum distance between the external auditory canal and tegmen was 3.79 (1.05) mm. Conclusions: This is the first study to automatically extract relevant temporal bone anatomical measurements from CT scans using segmentation propagation. Measurements from these models can streamline preoperative planning, improve future segmentation techniques, and help develop future image-guided or robot-assisted systems for temporal bone procedures. 
[ABSTRACT FROM AUTHOR]
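The kinds of measurements reported above (bend angles at the facial nerve genu, minimum distance between structures) reduce to simple geometry on registered landmark coordinates. A minimal sketch, not the authors' in-house algorithms; the function names and inputs are assumptions:

```python
import numpy as np

def angle_deg(a, b, c):
    """Angle at landmark b (degrees) formed by points a-b-c, e.g., the
    bend of the facial nerve at a genu."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def min_distance_mm(points_a, points_b):
    """Minimum pairwise distance between two labeled structures (e.g., the
    external auditory canal and the tegmen) from their voxel coordinates in mm."""
    pa = np.asarray(points_a, float)[:, None, :]
    pb = np.asarray(points_b, float)[None, :, :]
    return float(np.sqrt(((pa - pb) ** 2).sum(-1)).min())
```

Given segmentations propagated from the atlas, such functions would run over each labeled structure's coordinates to produce the tabulated statistics.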
- Published
- 2022
- Full Text
- View/download PDF
36. Toward intraoperative image-guided transoral robotic surgery
- Author
-
Liu, Wen P., Reaugamornrat, Sureerat, Deguet, Anton, Sorger, Jonathan M., Siewerdsen, Jeffrey H., Richmon, Jeremy, and Taylor, Russell H.
- Published
- 2013
- Full Text
- View/download PDF
37. Hyperspectral imaging for tissue classification in glioblastoma tumor patients: a deep spectral-spatial approach
- Author
-
Manni, Francesca, Cai, Chuchen, van der Sommen, Fons, Zinger, Sveta, Shan, Caifeng, Edström, Erik, Elmi-Terander, Adrian, Fabelo, Himar, Ortega, Samuel, Marrero Callicó, Gustavo, de With, Peter H.N., Linte, Cristian A., Siewerdsen, Jeffrey H., Center for Care & Cure Technology Eindhoven, Eindhoven MedTech Innovation Center, Video Coding & Architectures, Electrical Engineering, Signal Processing Systems, Biomedical Diagnostics Lab, and EAISI Health
- Subjects
medicine.medical_specialty, Hyperspectral imaging, business.industry, Tumor resection, Context (language use), medicine.disease, Gross Total Resection, Malignant brain tumor, Support vector machine, Ant colony optimization, Optical imaging, Medicine, Radiology, Frozen tissue, business, 3D-2D convolutional neural network (CNN), Glioblastoma - Abstract
Surgery is a crucial treatment for malignant brain tumors, where gross total resection improves the prognosis. Tissue samples taken during surgery are either subject to a preliminary intraoperative histological analysis or sent for a full pathological evaluation, which can take days or weeks. Whereas a complete pathological analysis involves an array of techniques, a preliminary analysis on frozen tissue is performed as quickly as possible (30-45 minutes on average) to provide fast feedback to the surgeon during the surgery. The surgeon uses this information to confirm that the resected tissue is indeed tumor and may, at least in theory, initiate repeated biopsies to help achieve gross total resection. However, due to the total turn-around time of the tissue inspection, repeated analyses may not be feasible during a single surgery. In this context, intraoperative image-guided techniques can improve the clinical workflow for tumor resection and improve outcome by aiding in the identification and removal of the malignant lesion. Hyperspectral imaging (HSI) is an optical imaging technique with the potential to extract combined spectral-spatial information. Exploiting HSI for human brain-tissue classification, a brain-tissue classifier is developed on 13 in-vivo hyperspectral images from 9 patients. The framework consists of a hybrid 3D-2D CNN-based approach and a band-selection step to enhance the capability of extracting both spectral and spatial information from the hyperspectral images. An overall accuracy of 77% was found when tumor, normal, and hyper-vascularized tissue are classified, clearly outperforming the state-of-the-art approaches (SVM, 2D-CNN). These results open an attractive perspective for intraoperative brain-tumor classification using HSI.
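A spectral-spatial classifier of this kind consumes small spatial patches with full (or band-selected) spectral depth. The sketch below shows only that input-preparation step, with hypothetical names and a `bands` argument standing in for the band-selection stage; it is not the authors' pipeline:

```python
import numpy as np

def spectral_spatial_patches(cube, coords, half=2, bands=None):
    """Extract (2*half+1)-square spatial patches with selected spectral
    bands from a hyperspectral cube of shape (H, W, B) -- the kind of
    input a hybrid 3D-2D CNN consumes."""
    H, W, B = cube.shape
    sel = np.arange(B) if bands is None else np.asarray(bands)
    out = []
    for r, c in coords:
        r0, c0 = r - half, c - half
        if r0 < 0 or c0 < 0 or r + half >= H or c + half >= W:
            continue  # skip patches that fall off the image
        out.append(cube[r0:r + half + 1, c0:c + half + 1, sel])
    return np.stack(out)  # (N, 2*half+1, 2*half+1, len(sel))
```

Each patch carries both the local spatial context and the spectral signature at the center pixel, which is what lets the 3D convolutions exploit spectral-spatial correlations.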
- Published
- 2021
38. Augmented-reality visualization for improved patient positioning workflow during MR-HIFU therapy
- Author
-
Manni, Francesca, Ferrer, Cyril J., Vincent, Celine E.C., Lai, Marco, Bartels, L.W., Bos, Clemens, van der Sommen, Fons, de With, Peter H.N., Linte, Cristian A., Siewerdsen, Jeffrey H., Center for Care & Cure Technology Eindhoven, Eindhoven MedTech Innovation Center, Video Coding & Architectures, and EAISI Health
- Subjects
Image-Guided Therapy, Computer science, Patient Tracking, medicine.medical_treatment, 0206 medical engineering, 02 engineering and technology, Augmented reality, 030218 nuclear medicine & medical imaging, 03 medical and health sciences, 0302 clinical medicine, MR-HIFU, medicine, Image fusion, Computer vision, Radiation treatment planning, business.industry, 020601 biomedical engineering, Patient tracking, High-intensity focused ultrasound, Visualization, Workflow, Artificial intelligence, business - Abstract
MR-guided high-intensity focused ultrasound (MR-HIFU) is a non-invasive therapeutic technology that has demonstrated clinical potential for tissue ablation. This therapeutic approach is a promising option for achieving faster pain palliation in patients with bone metastasis. However, its clinical adoption is still hampered by a lack of workflow integration. Currently, to ensure sufficient positioning accuracy, MR images have to be repeatedly acquired between patient re-positioning tasks, leading to a time-consuming preparation phase of at least 30 minutes that adds cost and subtracts from the available treatment time. Augmented reality (AR) is a promising technology that enables the fusion of medical images, such as MRI, with the view of an external camera, and represents a valid tool for faster localization and visualization of the lesion during positioning. The aim of this work is the implementation of a novel AR setup for accelerating patient positioning during MR-HIFU treatments by enabling adequate target positioning outside the MRI scanner. A marker-based approach was investigated for fusing MR data with video data to provide an augmented view. Initial experiments on four volunteers show that MR images were overlaid on the camera views with an average re-projection error of 3.13 mm, which matches the clinical requirements for this specific application. We conclude that the implemented system is suitable for MR-HIFU procedures and supports clinical adoption by improving patient positioning, thereby offering potential for shorter treatment times.
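The re-projection error reported for the marker-based overlay is the mean distance between projected 3D marker positions and their detected image locations. A minimal pinhole-camera sketch under stated assumptions (the `mm_per_px` scale factor converting pixel error to millimeters is an assumption of this example, not a detail from the paper):

```python
import numpy as np

def project(K, R, t, pts3d):
    """Pinhole projection of 3D marker points into the camera image."""
    cam = pts3d @ R.T + t            # world -> camera coordinates
    uv = cam @ K.T                   # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]    # perspective divide

def mean_reprojection_error(K, R, t, pts3d, detected_uv, mm_per_px=1.0):
    """Mean distance between projected and detected marker centers;
    mm_per_px (hypothetical here) converts pixel error to millimeters."""
    proj = project(K, R, t, pts3d)
    return float(np.linalg.norm(proj - detected_uv, axis=1).mean() * mm_per_px)
```

Evaluated over the fiducial markers in each volunteer acquisition, this kind of metric would yield the per-subject errors that average to the reported 3.13 mm.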
- Published
- 2021
- Full Text
- View/download PDF
39. Chapter 26 - Technology and applications in interventional imaging: 2D X-ray radiography/fluoroscopy and 3D cone-beam CT
- Author
-
Schafer, Sebastian and Siewerdsen, Jeffrey H.
- Published
- 2020
- Full Text
- View/download PDF
40. Deformable 3D–2D image registration and analysis of global spinal alignment in long‐length intraoperative spine imaging.
- Author
-
Zhang, Xiaoxuan, Uneri, Ali, Huang, Yixuan, Jones, Craig K., Witham, Timothy F., Helm, Patrick A., and Siewerdsen, Jeffrey H.
- Subjects
IMAGE registration, IMAGE analysis, VERTEBRAE, SPINE, IMAGING systems, PATIENT positioning - Abstract
Background: Spinal deformation during surgical intervention (caused by patient positioning and/or the correction of malalignment) confounds conventional navigation due to the assumptions of rigid transformation. Moreover, the ability to accurately quantify spinal alignment in the operating room would provide an assessment of the surgical product via metrics that correlate with clinical outcomes. Purpose: A method for deformable 3D–2D registration of preoperative CT to intraoperative long‐length tomosynthesis images is reported for an accurate 3D evaluation of device placement in the presence of spinal deformation and automated evaluation of global spinal alignment (GSA). Methods: Long‐length tomosynthesis ("Long Film," LF) images were acquired using an O‐arm imaging system (Medtronic, Minneapolis USA). A deformable 3D–2D patient registration was developed using multi‐scale masking (proceeding from the full‐length image to local subvolumes about each vertebra) to transform vertebral labels and planning information from preoperative CT to the LF images. Automatic measurement of GSA (main thoracic kyphosis [MThK] and lumbar lordosis [LL]) was obtained using a spline fit to registered labels. The "Known‐Component Registration" method for device registration was adapted to the multi‐scale process for 3D device localization from orthogonal LF images. The multi‐scale framework was evaluated using a deformable spine phantom in which pedicle screws were inserted, and deformations were induced over a range in LL ∼25°–80°. Further validation was carried out in a cadaver study with implanted pedicle screws and a similar range of spinal deformation. The accuracy of patient and device registration was evaluated in terms of 3D translational error and target registration error, respectively, and the accuracies of automatic GSA measurements were compared to manual annotation. 
Results: Phantom studies demonstrated accurate registration via the multi‐scale framework for all vertebral levels in both the neutral and deformed spine: median (interquartile range, IQR) patient registration error was 1.1 mm (0.7–1.9 mm IQR). Automatic measures of MThK and LL agreed with manual delineation within −1.1° ± 2.2° and 0.7° ± 2.0° (mean and standard deviation), respectively. Device registration error was 0.7 mm (0.4–1.0 mm IQR) at the screw tip and 0.9° (1.0°–1.5°) about the screw trajectory. Deformable 3D–2D registration significantly outperformed conventional rigid registration (p < 0.05), which exhibited device registration errors of 2.1 mm (0.8–4.1 mm) and 4.1° (1.2°–9.5°). Cadaver studies verified performance under realistic conditions, demonstrating patient registration error of 1.6 mm (0.9–2.1 mm); MThK within −4.2° ± 6.8° and LL within 1.7° ± 3.5°; and device registration error of 0.8 mm (0.5–1.9 mm) and 0.7° (0.4°–1.2°) for the multi‐scale deformable method, compared to 2.5 mm (1.0–7.9 mm) and 2.3° (1.6°–8.1°) for rigid registration (p < 0.05). Conclusion: The deformable 3D–2D registration framework leverages long‐length intraoperative imaging to achieve accurate patient and device registration over the extended lengths of the spine (up to 64 cm) even with strong anatomical deformation. The method offers a new means for the quantitative validation of spinal correction (intraoperative GSA measurement) and the 3D verification of device placement in comparison to preoperative images and planning data. [ABSTRACT FROM AUTHOR]
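The automatic GSA measurement described above (a spline fit through registered vertebral labels, yielding kyphosis/lordosis angles) can be sketched as a curve fit plus tangent angles. This is an illustrative surrogate, not the authors' method: the cubic polynomial in place of a spline, the coordinate convention, and the Cobb-like angle definition are assumptions of this example.

```python
import numpy as np

def alignment_angle_deg(centroids_sz, level_a, level_b):
    """Fit a smooth curve through vertebral centroids (s = superior-inferior
    position, z = anterior-posterior position) and report the angle between
    the curve tangents at two vertebral levels (a Cobb-like angle)."""
    s, z = np.asarray(centroids_sz, float).T
    coeffs = np.polyfit(s, z, deg=3)           # smooth-curve surrogate for a spline
    slopes = np.polyval(np.polyder(coeffs), [s[level_a], s[level_b]])
    angles = np.degrees(np.arctan(slopes))     # tangent angle at each level
    return float(abs(angles[0] - angles[1]))
```

Applied between the endplate levels bounding the main thoracic curve or the lumbar spine, this yields MThK- and LL-style angles directly from the registered labels.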
- Published
- 2022
- Full Text
- View/download PDF
41. Model-based three-material decomposition in dual-energy CT using the volume conservation constraint.
- Author
-
Liu, Stephen Z, Tivnan, Matthew, Osgood, Greg M, Siewerdsen, Jeffrey H, Stayman, J Webster, and Zbijewski, Wojciech
- Subjects
CONSTRAINED optimization, CONE beam computed tomography, STANDARD deviations, COMPACT bone - Abstract
Objective. We develop a model-based optimization algorithm for 'one-step' dual-energy (DE) CT decomposition of three materials directly from projection measurements. Approach. Since the three-material problem is inherently underdetermined, we incorporate the volume conservation principle (VCP) as a pair of equality and nonnegativity constraints into the objective function of the recently reported model-based material decomposition (MBMD). An optimization algorithm (constrained MBMD, CMBMD) is derived that utilizes voxel-wise separability to partition the volume into a VCP-constrained region solved using interior-point iterations, and an unconstrained region (air surrounding the object, where VCP is violated) solved with conventional two-material MBMD. CMBMD is validated in simulations and experiments in application to bone composition measurements in the presence of metal hardware using DE cone-beam CT (CBCT). A kV-switching protocol with non-coinciding low- and high-energy (LE and HE) projections was assumed. CMBMD with decomposed base materials of cortical bone, fat, and metal (titanium, Ti) is compared to MBMD with (i) fat-bone and (ii) fat-Ti bases. Main results. Three-material CMBMD exhibits a substantial reduction in metal artifacts relative to the two-material MBMD implementations. The accuracy of cortical bone volume fraction estimates is markedly improved using CMBMD, with ∼5–10× lower normalized root mean squared error in simulations with anthropomorphic knee phantoms (depending on the complexity of the metal component) and ∼2–2.5× lower in an experimental test-bench study. Significance. In conclusion, we demonstrated one-step three-material decomposition of DE CT using volume conservation as an optimization constraint. 
The proposed method might be applicable to DE applications such as bone marrow edema imaging (fat-bone-water decomposition) or multi-contrast imaging, especially on CT/CBCT systems that do not provide coinciding LE and HE ray paths required for conventional projection-domain DE decomposition. [ABSTRACT FROM AUTHOR]
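The VCP constraint pair (volume fractions sum to one, each nonnegative) defines the probability simplex. As a self-contained illustration of what enforcing those constraints means per voxel, here is the standard closed-form Euclidean projection onto that simplex; the paper itself uses interior-point iterations inside a model-based objective, not this projection:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of per-voxel volume-fraction estimates onto
    {x : x >= 0, sum(x) = 1} -- the VCP equality and nonnegativity
    constraints -- via the standard sort-and-threshold formula."""
    v = np.asarray(v, float)
    u = np.sort(v)[::-1]                      # sort descending
    css = np.cumsum(u)
    # largest k for which the shifted component stays positive
    rho = np.nonzero(u + (1.0 - css) / (np.arange(v.size) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)      # common shift
    return np.maximum(v + theta, 0.0)
```

A point already satisfying VCP is returned unchanged; an infeasible estimate is shifted and clipped to the nearest feasible three-material composition.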
- Published
- 2022
- Full Text
- View/download PDF
42. Statistical Shape Model of the Temporal Bone Using Segmentation Propagation.
- Author
-
Ding, Andy S., Lu, Alexander, Li, Zhaoshuo, Galaiya, Deepa, Ishii, Masaru, Siewerdsen, Jeffrey H., Taylor, Russell H., and Creighton, Francis X.
- Published
- 2022
- Full Text
- View/download PDF
43. Automated Registration-Based Temporal Bone Computed Tomography Segmentation for Applications in Neurotologic Surgery.
- Author
-
Ding, Andy S., Lu, Alexander, Li, Zhaoshuo, Galaiya, Deepa, Siewerdsen, Jeffrey H., Taylor, Russell H., and Creighton, Francis X.
- Abstract
Objective: This study investigates the accuracy of an automated method to rapidly segment relevant temporal bone anatomy from cone beam computed tomography (CT) images. Implementation of this segmentation pipeline has potential to improve surgical safety and decrease operative time by augmenting preoperative planning and interfacing with image-guided robotic surgical systems. Study Design: Descriptive study of predicted segmentations. Setting: Academic institution. Methods: We have developed a computational pipeline based on the symmetric normalization registration method that predicts segmentations of anatomic structures in temporal bone CT scans using a labeled atlas. To evaluate accuracy, we created a data set by manually labeling relevant anatomic structures (eg, ossicles, labyrinth, facial nerve, external auditory canal, dura) for 16 deidentified high-resolution cone beam temporal bone CT images. Automated segmentations from this pipeline were compared against ground-truth manual segmentations by using modified Hausdorff distances and Dice scores. Runtimes were documented to determine the computational requirements of this method. Results: Modified Hausdorff distances and Dice scores between predicted and ground-truth labels were as follows: malleus (0.100 ± 0.054 mm; Dice, 0.827 ± 0.068), incus (0.100 ± 0.033 mm; Dice, 0.837 ± 0.068), stapes (0.157 ± 0.048 mm; Dice, 0.358 ± 0.100), labyrinth (0.169 ± 0.100 mm; Dice, 0.838 ± 0.060), and facial nerve (0.522 ± 0.278 mm; Dice, 0.567 ± 0.130). A quad-core 16GB RAM workstation completed this segmentation pipeline in 10 minutes. Conclusions: We demonstrated submillimeter accuracy for automated segmentation of temporal bone anatomy when compared against hand-segmented ground truth using our template registration pipeline. This method is not dependent on the training data volume that plagues many complex deep learning models. 
Favorable runtime and low computational requirements underscore this method's translational potential. [ABSTRACT FROM AUTHOR]
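The two evaluation metrics used here (Dice score and modified Hausdorff distance) can be sketched with distance transforms. The Dice formula is standard; the modified Hausdorff sketch below uses a common voxel-based approximation (mean mask-to-mask distances rather than strictly surface points), which may differ in detail from the authors' implementation:

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def modified_hausdorff(a, b, spacing=1.0):
    """Modified Hausdorff distance: max of the two mean distances from one
    mask's voxels to the other mask, via Euclidean distance transforms."""
    a, b = a.astype(bool), b.astype(bool)
    da = ndimage.distance_transform_edt(~a) * spacing  # distance to nearest a voxel
    db = ndimage.distance_transform_edt(~b) * spacing  # distance to nearest b voxel
    return max(da[b].mean(), db[a].mean())
```

Passing the scan's voxel spacing makes the distance metric come out in millimeters, matching the submillimeter values tabulated in the abstract.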
- Published
- 2022
- Full Text
- View/download PDF
44. Intraoperative cone-beam CT for head and neck surgery: Feasibility of clinical implementation using a prototype mobile C-arm
- Author
-
King, Emma, Daly, Michael J., Chan, Harley, Bachar, Gideon, Dixon, Benjamin J., Siewerdsen, Jeffrey H., and Irish, Jonathan C.
- Published
- 2013
- Full Text
- View/download PDF
45. Cone-beam CT with a noncircular (sine-on-sphere) orbit: imaging performance of a clinical system for image-guided interventions.
- Author
-
Jones, A. Kyle, Ahmad, Moiz, Raza, Shaan M., Chen, Stephen R., and Siewerdsen, Jeffrey H.
- Published
- 2024
- Full Text
- View/download PDF
46. Technical assessment of 2D and 3D imaging performance of an IGZO‐based flat‐panel X‐ray detector.
- Author
-
Sheth, Niral Milan, Uneri, Ali, Helm, Patrick A, Zbijewski, Wojciech, and Siewerdsen, Jeffrey H
- Subjects
THREE-dimensional imaging, ELECTRONIC noise, INDIUM gallium zinc oxide, DETECTORS, IMAGING systems, SCINTILLATORS - Abstract
Background: Indirect detection flat‐panel detectors (FPDs) consisting of hydrogenated amorphous silicon (a‐Si:H) thin‐film transistors (TFTs) are a prevalent technology for digital x‐ray imaging. However, their performance is challenged in applications requiring low exposure levels, high spatial resolution, and high frame rate. Emerging FPD designs using metal oxide TFTs may offer potential performance improvements compared to FPDs based on a‐Si:H TFTs. Purpose: This work investigates the imaging performance of a new indium gallium zinc oxide (IGZO) TFT‐based detector in 2D fluoroscopy and 3D cone‐beam CT (CBCT). Methods: The new FPD consists of a sensor array combining IGZO TFTs with a‐Si:H photodiodes and a 0.7‐mm thick CsI:Tl scintillator. The FPD was implemented on an x‐ray imaging bench with system geometry emulating intraoperative CBCT. A conventional FPD with a‐Si:H TFTs and a 0.6‐mm thick CsI:Tl scintillator was similarly implemented as a basis of comparison. 2D imaging performance was characterized in terms of electronic noise, sensitivity, linearity, lag, spatial resolution (modulation transfer function, MTF), image noise (noise‐power spectrum, NPS), and detective quantum efficiency (DQE) with entrance air kerma (EAK) ranging from 0.3 to 1.2 μGy. 3D imaging performance was evaluated in terms of the 3D MTF and noise‐equivalent quanta (NEQ), soft‐tissue contrast‐to‐noise ratio (CNR), and image quality evident in anthropomorphic phantoms for a range of anatomical sites and dose, with weighted air kerma, $K_w$, ranging from 0.8 to 4.9 mGy. Results: The 2D imaging performance of the IGZO‐based FPD exhibited up to ∼1.7× lower electronic noise than the a‐Si:H FPD at matched pixel pitch. Furthermore, the IGZO FPD exhibited ∼27% increase in mid‐frequency DQE (1 mm−1) at matched pixel size and dose (EAK ≈ 1.0 μGy) and ∼11% increase after adjusting for differences in scintillator thickness. 2D spatial resolution was limited by the scintillator for each FPD. 
The IGZO‐based FPD demonstrated improved 3D NEQ at all spatial frequencies in both head (≥25% increase for all dose levels) and body (≥10% increase for $K_w$ ≤ 2 mGy) imaging scenarios. These characteristics translated to improved low‐contrast visualization in anthropomorphic phantoms, demonstrating ≥10% improvement in CNR and extension of the low‐dose range for which the detector is input‐quantum limited. Conclusion: The IGZO‐based FPD demonstrated improvements in electronic noise, image lag, and NEQ that translated to measurable improvements in 2D and 3D imaging performance compared to a conventional FPD based on a‐Si:H TFTs. The improvements are most beneficial for 2D or 3D imaging scenarios involving low‐dose and/or high‐frame rate. [ABSTRACT FROM AUTHOR]
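The 2D metrics reported here are linked by a standard relation: DQE(f) = MTF(f)² / (q · NNPS(f)), where NNPS is the NPS normalized by the mean signal squared and q is the incident photon fluence. A minimal sketch of that conventional formula (not code from this work):

```python
import numpy as np

def dqe(mtf, nnps, q):
    """Frequency-dependent detective quantum efficiency from the presampled
    MTF, the normalized noise-power spectrum NNPS = NPS / (mean signal)^2,
    and the incident photon fluence q (photons per unit area):

        DQE(f) = MTF(f)^2 / (q * NNPS(f))
    """
    return np.asarray(mtf, float) ** 2 / (q * np.asarray(nnps, float))
```

For an ideal quantum-limited detector at zero frequency, MTF = 1 and NNPS = 1/q, so DQE = 1; lower electronic noise (smaller NNPS at low exposure) is exactly what raises the IGZO panel's DQE in the low-dose regime.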
- Published
- 2022
- Full Text
- View/download PDF
47. Maxillary reconstruction using the scapular tip free flap: A radiologic comparison of 3D morphology
- Author
-
Pagedar, Nitin A., Gilbert, Ralph W., Chan, Harley, Daly, Michael J., Irish, Jonathan C., and Siewerdsen, Jeffrey H.
- Published
- 2012
- Full Text
- View/download PDF
48. VISUALIZATION OF ANTERIOR SKULL BASE DEFECTS WITH INTRAOPERATIVE CONE-BEAM CT
- Author
-
Bachar, Gideon, Barker, Emma, Chan, Harley, Daly, Michael J., Nithiananthan, Sajendra, Vescan, Al, Irish, Jonathan C., and Siewerdsen, Jeffrey H.
- Published
- 2010
- Full Text
- View/download PDF
49. Cone-beam-CT guided radiation therapy: technical implementation
- Author
-
Létourneau, Daniel, Wong, John W., Oldham, Mark, Gulam, Misbah, Watt, Lindsay, Jaffray, David A., Siewerdsen, Jeffrey H., and Martinez, Alvaro A.
- Published
- 2005
- Full Text
- View/download PDF
50. Prostate gland motion assessed with cine-magnetic resonance imaging (cine-MRI)
- Author
-
Ghilezan, Michel J., Jaffray, David A., Siewerdsen, Jeffrey H., Van Herk, Marcel, Shetty, Anil, Sharpe, Michael B., Zafar Jafri, Syed, Vicini, Frank A., Matter, Richard C., Brabbins, Donald S., and Martinez, Alvaro A.
- Published
- 2005
- Full Text
- View/download PDF