60 results for "Stephen M. Pizer"
Search Results
2. Colon10k: A Benchmark For Place Recognition In Colonoscopy
- Author
-
Rui Wang, Yubo Zhang, Jan-Michael Frahm, Sarah K. McGill, Ruibin Ma, Stephen M. Pizer, and Julian G. Rosenman
- Subjects
Computer science, 3D reconstruction, Colonoscopy, Visualization, Region of interest, Benchmark (computing), Computer vision, Artificial intelligence, Image retrieval
- Abstract
Place recognition in colonoscopy is needed for several reasons. 1) If a certain region needs to be rechecked during an endoscopy, the endoscopist must re-localize the camera accurately to the region of interest. 2) Place recognition is needed in same-patient follow-up colonoscopy to localize the region where a polyp was cut off. 3) Recent developments in colonoscopic 3D reconstruction need place recognition to establish long-range correspondence, e.g., for loop closure. However, traditional image retrieval techniques do not generalize well to colonic images. Moreover, although place recognition, or instance-level image retrieval, is a widely researched topic in computer vision with several published benchmarks, there has been no specific research or benchmark for endoscopic images, which differ significantly from the images used in traditional computer vision tasks. In this paper we present a testing dataset with manually labeled ground truth comprising 10,126 images from 20 colonoscopic subsequences, and we perform an extensive evaluation of existing place recognition techniques using several metrics.
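As a loose illustration of the retrieval-style evaluation described above, the sketch below scores a query set against a database of image descriptors with a recall@k metric. It is a hypothetical example with random descriptors, not the benchmark's actual protocol or feature set.

```python
import numpy as np

def recall_at_k(query_desc, db_desc, query_labels, db_labels, k=5):
    """Fraction of queries whose top-k nearest database images
    (by cosine similarity) contain at least one correct place label."""
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    d = db_desc / np.linalg.norm(db_desc, axis=1, keepdims=True)
    sims = q @ d.T                           # cosine similarities, queries x database
    topk = np.argsort(-sims, axis=1)[:, :k]  # indices of the k best matches per query
    hits = [query_labels[i] in db_labels[topk[i]] for i in range(len(q))]
    return float(np.mean(hits))

# toy usage with random descriptors; each query is a noisy copy of one database image
rng = np.random.default_rng(0)
db = rng.normal(size=(100, 128))
queries = db[:10] + 0.05 * rng.normal(size=(10, 128))
print(recall_at_k(queries, db, np.arange(10), np.arange(100), k=5))
```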
- Published
- 2021
3. Lighting Enhancement Aids Reconstruction of Colonoscopic Surfaces
- Author
-
Yubo Zhang, Stephen M. Pizer, Julian G. Rosenman, Sarah K. McGill, Ruibin Ma, and Shuxian Wang
- Subjects
Computer science, 3D reconstruction, Colonoscopy, Image enhancement, Consistency, Computer vision, Artificial intelligence
- Abstract
High screening coverage during colonoscopy is crucial to effectively prevent colon cancer. Previous work has made it possible to alert the doctor to unsurveyed regions by reconstructing the 3D colonoscopic surface from colonoscopy videos in real time. However, the lighting inconsistency of colonoscopy videos can cause a key component of the colonoscopic reconstruction system, the SLAM optimization, to fail. In this work we focus on the lighting problem in colonoscopy videos. To improve the lighting consistency of colonoscopy videos, we have found it necessary to apply a lighting correction that adapts to the intensity distribution of recent video frames. To achieve this in real time, we designed and trained a recurrent neural network (RNN) that adapts the gamma value in a gamma-correction process. Applied in the colonoscopic surface reconstruction system, our lightweight model significantly boosts the reconstruction success rate, making a larger proportion of colonoscopy video segments reconstructable and improving the reconstruction quality of the already reconstructed segments.
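A minimal sketch of the gamma-correction step is given below. The authors adapt the gamma value with a trained RNN; the simple intensity-distribution heuristic used here to pick gamma is only an illustrative stand-in for that learned predictor.

```python
import numpy as np

def gamma_correct(frame, gamma):
    """Apply gamma correction to a float image with values in [0, 1]."""
    return np.clip(frame, 0.0, 1.0) ** gamma

def adaptive_gamma(recent_frames, target_mean=0.5):
    """Heuristic stand-in for the learned predictor: choose gamma so that
    the mean intensity of recent frames maps roughly to target_mean
    (since m ** gamma = target implies gamma = log(target) / log(m))."""
    m = float(np.mean([f.mean() for f in recent_frames]))
    m = float(np.clip(m, 1e-3, 1 - 1e-3))
    return np.log(target_mean) / np.log(m)

# usage on synthetic dark frames
frames = [np.random.default_rng(i).random((64, 64)) * 0.3 for i in range(5)]
g = adaptive_gamma(frames)
corrected = gamma_correct(frames[-1], g)
print(f"gamma={g:.2f}, mean before={frames[-1].mean():.2f}, after={corrected.mean():.2f}")
```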
- Published
- 2021
4. A Novel Method for High-Dimensional Anatomical Mapping of Extra-Axial Cerebrospinal Fluid: Application to the Infant Brain
- Author
-
Mahmoud Mostapha, Sun Hyung Kim, Alan C. Evans, Stephen R. Dager, Annette M. Estes, Robert C. McKinstry, Kelly N. Botteron, Guido Gerig, Stephen M. Pizer, Robert T. Schultz, Heather C. Hazlett, Joseph Piven, Jessica B. Girault, Mark D. Shen, and Martin A. Styner
- Subjects
Brain development, Autism, Extra-axial cerebrospinal fluid (EA-CSF), High dimensional, Biology, Cerebrospinal fluid, Neuroimaging, Laplacian PDE, Segmentation, Structural MRI, Neurodevelopmental disorders, General Neuroscience, Surface analysis, Cerebral cortex, Subarachnoid space, Neuroscience
- Abstract
Cerebrospinal fluid (CSF) plays an essential role in early postnatal brain development. Extra-axial CSF (EA-CSF) volume, which is characterized by CSF in the subarachnoid space surrounding the brain, is a promising marker in the early detection of young children at risk for neurodevelopmental disorders. Previous studies have focused on global EA-CSF volume across the entire dorsal extent of the brain, and not regionally-specific EA-CSF measurements, because no tools were previously available for extracting local EA-CSF measures suitable for localized cortical surface analysis. In this paper, we propose a novel framework for the localized, cortical surface-based analysis of EA-CSF. The proposed processing framework combines probabilistic brain tissue segmentation, cortical surface reconstruction, and streamline-based local EA-CSF quantification. The quantitative analysis of local EA-CSF was applied to a dataset of typically developing infants with longitudinal MRI scans from 6 to 24 months of age. There was a high degree of consistency in the spatial patterns of local EA-CSF across age using the proposed methods. Statistical analysis of local EA-CSF revealed several novel findings: several regions of the cerebral cortex showed reductions in EA-CSF from 6 to 24 months of age, and specific regions showed higher local EA-CSF in males compared to females. These age-, sex-, and anatomically-specific patterns of local EA-CSF would not have been observed if only a global EA-CSF measure were utilized. The proposed methods are integrated into a freely available, open-source, cross-platform, user-friendly software tool, allowing neuroimaging labs to quantify local extra-axial CSF in their neuroimaging studies to investigate its role in typical and atypical brain development.
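The Laplacian-PDE idea listed in the subject terms, used here for streamline-based local EA-CSF quantification, can be illustrated with a small sketch: solve Laplace's equation between an inner and an outer boundary so that streamlines of the resulting potential connect the two surfaces. This toy 2D Jacobi relaxation is an assumption-laden illustration, not the paper's 3D pipeline.

```python
import numpy as np

def solve_laplace(mask_inner, mask_outer, domain, n_iter=500):
    """Jacobi relaxation of Laplace's equation on a 2D grid.
    The potential is fixed to 0 on the inner boundary and 1 on the outer boundary;
    streamlines of the resulting gradient field connect the two surfaces."""
    phi = np.zeros(domain.shape, dtype=float)
    phi[mask_outer] = 1.0
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                      np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        # update only interior cells; boundary values stay fixed
        phi = np.where(domain & ~mask_inner & ~mask_outer, avg, phi)
    return phi

# toy annulus: inner circle (cortical surface stand-in) to outer circle (dura stand-in)
y, x = np.mgrid[-32:32, -32:32]
r = np.hypot(x, y)
inner, outer = r <= 10, (r >= 28) & (r <= 30)
domain = r <= 30
phi = solve_laplace(inner, outer, domain)
print(phi[32, 32 + 20])   # potential at a point partway between the boundaries
```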
- Published
- 2020
5. RNNSLAM: Reconstructing the 3D colon to visualize missing regions during a colonoscopy
- Author
-
Ruibin Ma, Stephen M. Pizer, Julian G. Rosenman, Sarah K. McGill, Jan-Michael Frahm, Yubo Zhang, and Rui Wang
- Subjects
Computer science, Colon, Colonoscopy, Colonic polyps, Health Informatics, Simultaneous localization and mapping, Humans, Radiology, Nuclear Medicine and imaging, Computer vision, Radiological and Ultrasound Technology, Deep learning, Gold standard, Computer Graphics and Computer-Aided Design, Recurrent neural network, Pose prediction, Computer Vision and Pattern Recognition, Artificial intelligence, Detection rate
- Abstract
Colonoscopy is the gold standard for screening and treatment of pre-cancerous polyps. The polyp detection rate is closely tied to the percentage of colonic surface surveyed. However, current colonoscopy technique cannot guarantee that all of the colonic surface is well examined, because of incomplete camera orientations and occlusions. The missing regions can hardly be noticed from a continuous first-person perspective. Therefore, a useful contribution would be an automatic system that computes missing regions from an endoscopic video in real time and alerts the endoscopist when a large missing region is detected. We present a novel method that reconstructs dense chunks of a 3D colon in real time, leaving the unsurveyed part unreconstructed. The method combines a standard SLAM (simultaneous localization and mapping) system with a depth and pose prediction network to achieve much more robust tracking and less drift, addressing the difficulties that colonoscopic images pose for existing SLAM systems and end-to-end deep learning methods.
- Published
- 2020
6. Real-Time 3D Reconstruction of Colonoscopic Surfaces for Determining Missing Regions
- Author
-
Ruibin Ma, Sarah K. McGill, Stephen M. Pizer, Rui Wang, Julian G. Rosenman, and Jan-Michael Frahm
- Subjects
Surface, Endoscope, Computer science, Pipeline, Colonoscopy, Simultaneous localization and mapping, Computer vision, Large intestine, 3D reconstruction, Cancer, Artificial intelligence, Scale
- Abstract
Colonoscopy is the most widely used medical technique to screen the human large intestine (colon) for cancer precursors. However, parts of the surface are frequently not visualized, and it is hard for the endoscopist to realize this from the video. Non-visualization results from failure to orient the endoscope toward the full circumference of parts of the colon, from occlusion by colon structures, and from intervening materials inside the colon. Our solution is real-time dense 3D reconstruction of colon chunks with display of the missing regions. We accomplish this with a novel deep-learning-driven dense SLAM (simultaneous localization and mapping) system that produces a camera trajectory and a dense reconstructed surface for colon chunks (short lengths of colon). Traditional SLAM systems work poorly on the low-textured colonoscopy frames and are subject to severe scale/camera drift. In our method a recurrent neural network (RNN) predicts scale-consistent depth maps and camera poses for successive frames. These outputs are incorporated into a standard SLAM pipeline with local windowed optimization. The depth maps are finally fused into a global surface using the optimized camera poses. To the best of our knowledge, we are the first to reconstruct a dense colon surface from video in real time and to display the missing surface.
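One step named above, fusing per-frame depth maps into a global surface using the optimized camera poses, starts from back-projecting each depth map into world coordinates. The sketch below shows only that back-projection for a single frame under an assumed known pose and pinhole intrinsics; the actual SLAM and fusion machinery is not reproduced here.

```python
import numpy as np

def backproject(depth, K, cam_to_world):
    """Lift a depth map to a 3D point cloud in world coordinates.
    depth: (H, W) depths in the camera frame; K: 3x3 intrinsics;
    cam_to_world: 4x4 rigid pose of the camera."""
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N homogeneous pixels
    rays = np.linalg.inv(K) @ pix                                      # unit-depth viewing rays
    pts_cam = rays * depth.reshape(1, -1)                              # scale rays by depth
    pts_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])
    return (cam_to_world @ pts_h)[:3].T                                # N x 3 world points

# toy usage: a constant-depth plane seen by an identity-pose camera
K = np.array([[300.0, 0, 64], [0, 300.0, 48], [0, 0, 1]])
cloud = backproject(np.full((96, 128), 2.0), K, np.eye(4))
print(cloud.shape, cloud[:2])
```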
- Published
- 2019
7. Features for the Detection of Flat Polyps in Colonoscopy Video
- Author
-
Sarah K. McGill, Jared Vicory, Stephen M. Pizer, Julian G. Rosenman, and Miao Fan
- Subjects
Endoscope, Colorectal cancer, Colonoscopy, Cancer, Digestive system diseases, Optical colonoscopy, Radiology, Cancer death
- Abstract
Colorectal cancer is the second most common cause of cancer death in the United States, with an estimated 140,000 new cases leading to 50,000 deaths this year. The best treatment is to detect and treat the cancer before it becomes invasive and spreads. The most common form of detection is the use of optical colonoscopy, in which the clinician visually inspects the surface of the colon through an endoscope to detect the presence of polyps. Studies have shown that even the best clinicians will sometimes miss polyps, especially the more subtle flat polyps, and that many cancers that develop in the years immediately following a colonoscopy likely originate from missed polyps. In this paper we describe techniques for extracting several medically driven features from colonoscopy video that can be used to detect the presence of flat polyps. Initial quantitative and qualitative results show that each of these features on its own provides some level of discrimination and that, when combined, they have the potential to support robust detection of flat polyps.
- Published
- 2018
8. SU-E-J-64: Towards a Patient Specific Deformation Model in the Male Pelvis for IGRT via Limited Angle Imaging
- Author
-
Chen-Rui Chou, Stephen M. Pizer, David S. Lalush, C Frederick, and Sha Chang
- Subjects
Similarity, Computer science, Image quality, Radiography, General Medicine, Iterative reconstruction, Deformation, Displacement, Surgery, Transformation, Prostate, Medical imaging, Computer vision, Artificial intelligence, Digital radiography, Image-guided radiation therapy
- Abstract
Purpose: To evaluate the feasibility of patient-specific deformation models (PSDM) in the male pelvis for IGRT by limited-angle imaging. Methods: In IGRT via limited-angle imaging, insufficient angular projections are acquired to uniquely determine a 3D attenuation distribution. For highly limited geometries, image quality may be too poor for successful non-rigid registration. This can be overcome by restricting the transformation space to one containing only feasible transformations learned from prior 3D images. This approach has been successfully applied in the lung region, where the majority of deformation is due to respiratory motion, which can be adequately observed at planning time with RCCT. Typically, the phases of the RCCT are registered together to form a group-wise mean image and transformations to each training image. PCA is then performed on the transformation displacement vector fields. The transformation is found at treatment time by registration of digitally reconstructed radiographs of the transformed image to the measured projections, optimizing over the parameters of the PCA subspace. In the male pelvis, deformation is much more complicated than respiratory deformation and is largely inter-fractional, due to changes in bladder and rectal contents, articulation, and motion of the bowels. A similar model is developed for the male pelvis which takes into account pelvic anatomical information and handles the more complicated deformation space. Results: Using the leave-one-out method, Dice similarity coefficients in the prostate compared with manual segmentations are increased over those obtained by rigid registration and are comparable with those obtained by 3D non-rigid registration methods. Conclusions: This method produces better results than rigid registration and is comparable with results obtained by 3D/3D registration even though it uses limited-angle projections. However, it relies on daily training CTs, so it is not yet a viable clinical method. Funding provided in part by Siemens Medical.
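The PCA-on-displacement-fields idea can be sketched with synthetic data: flatten each training displacement field into a row, fit PCA, and reconstruct a deformation from a few subspace coefficients (the quantities that would be optimized against the measured projections at treatment time). This is an illustrative sketch with made-up sizes, not the authors' implementation or the DRR optimization loop.

```python
import numpy as np
from sklearn.decomposition import PCA

# Each training deformation is a dense displacement field (D x H x W x 3),
# flattened into one row; PCA yields a low-dimensional deformation subspace.
rng = np.random.default_rng(0)
n_train, shape = 8, (16, 16, 16, 3)                 # tiny synthetic example
fields = rng.normal(scale=0.5, size=(n_train, np.prod(shape)))

pca = PCA(n_components=3)
coeffs = pca.fit_transform(fields)                  # subspace coefficients of training fields

def synthesize(c):
    """Reconstruct a displacement field from a few subspace coefficients --
    the quantity that would be optimized against the measured projections."""
    return (pca.mean_ + c @ pca.components_).reshape(shape)

field = synthesize(np.array([1.0, -0.5, 0.2]))
print(field.shape)
```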
- Published
- 2017
9. The Endoscopogram: A 3D Model Reconstructed from Endoscopic Video Frames
- Author
-
Marc Niethammer, Ron Alterovitz, True Price, Stephen M. Pizer, Qingyu Zhao, and Julian G. Rosenman
- Subjects
Surface, Computer science, Frame, Pipeline, Imaging phantom, Visualization, Radiation therapy, Computer vision, Artificial intelligence, Spatial analysis
- Abstract
Endoscopy enables high-resolution visualization of tissue texture and is a critical step in many clinical workflows, including diagnosis and radiation therapy treatment planning for cancers in the nasopharynx. However, an endoscopic video does not provide explicit 3D spatial information, making it difficult to use in tumor localization, and it is inefficient to review. We introduce a pipeline for automatically reconstructing a textured 3D surface model, which we call an endoscopogram, from multiple 2D endoscopic video frames. Our pipeline first reconstructs a partial 3D surface model for each individual input 2D frame. In the next step (which is the focus of this paper), we generate a single high-quality 3D surface model using a groupwise registration approach that fuses multiple, partially overlapping, incomplete, and deformed surface models together. We generate endoscopograms from synthetic, phantom, and patient data and show that our registration approach can account for tissue deformations and reconstruction inconsistency across endoscopic video frames.
- Published
- 2016
10. Sa1930 MISSED COLONIC SURFACE AREA AT COLONOSCOPY CAN BE CALCULATED WITH COMPUTERIZED 3D RECONSTRUCTION
- Author
-
Marc Niethammer, Julian G. Rosenman, Stephen M. Pizer, Sarah K. McGill, Ruibin Ma, Miao Fan, Qingyu Zhao, Joel E. Tepper, Ron Alterovitz, Rui Wang, and Jan-Michael Frahm
- Subjects
3D reconstruction, Gastroenterology, Colonoscopy, Radiology, Nuclear Medicine and imaging
- Published
- 2018
11. Comparison of human and automatic segmentations of kidneys from CT images
- Author
-
Manjori Rao, Stephen M. Pizer, Gregg Tracton, Edward L. Chaney, Joshua V. Stough, Keith E. Muller, and Yueh-Yun Chi
- Subjects
Cancer Research, Radiation, Initialization, Image processing, Pattern recognition, Image segmentation, Kidney, Hausdorff distance, Oncology, Voxel, Computer-assisted image interpretation, Humans, Radiology, Nuclear Medicine and imaging, Segmentation, Artificial intelligence, Principal geodesic analysis, X-ray computed tomography, Volume
- Abstract
Purpose: A controlled observer study was conducted to compare a method for automatic image segmentation with conventional user-guided segmentation of right and left kidneys from planning computerized tomographic (CT) images. Methods and Materials: Deformable shape models called m-reps were used to automatically segment right and left kidneys from 12 target CT images, and the results were compared with careful manual segmentations performed by two human experts. M-rep models were trained based on manual segmentations from a collection of images that did not include the targets. Segmentation using m-reps began with interactive initialization to position the kidney model over the target kidney in the image data. Fully automatic segmentation proceeded through two stages at successively smaller spatial scales. At the first stage, a global similarity transformation of the kidney model was computed to position the model closer to the target kidney. The similarity transformation was followed by large-scale deformations based on principal geodesic analysis (PGA). During the second stage, the medial atoms comprising the m-rep model were deformed one by one. This procedure was iterated until no changes were observed. The transformations and deformations at both stages were driven by optimizing an objective function with two terms. One term penalized the currently deformed m-rep by an amount proportional to its deviation from the mean m-rep derived from PGA of the training segmentations. The second term computed a model-to-image match term based on the goodness of match of the trained intensity template for the currently deformed m-rep with the corresponding intensity data in the target image. Human and m-rep segmentations were compared using quantitative metrics provided in a toolset called Valmet. Metrics reported in this article include (1) percent volume overlap; (2) mean surface distance between two segmentations; and (3) maximum surface separation (Hausdorff distance). Results: Averaged over all kidneys, the mean surface separation was 0.12 cm, the mean Hausdorff distance was 0.99 cm, and the mean volume overlap for human segmentations was 88.8%. Between human and m-rep segmentations, the mean surface separation was 0.18-0.19 cm, the mean Hausdorff distance was 1.14-1.25 cm, and the mean volume overlap was 82-83%. Conclusions: Overall in this study, the best m-rep kidney segmentations were at least as good as careful manual slice-by-slice segmentations performed by two experienced humans, and the worst performance was no worse than typical segmentations from our clinical setting. The mean surface separations for human-m-rep segmentations were slightly larger than for human-human segmentations but still in the subvoxel range, and volume overlap and maximum surface separation were slightly better for human-human comparisons. These results were expected because of experimental factors that favored comparison of the human-human segmentations. In particular, m-rep agreement with humans appears to have been limited largely by fundamental differences between manual slice-by-slice and true three-dimensional segmentation, imaging artifacts, image voxel dimensions, and the use of an m-rep model that produced a smooth surface across the renal pelvis.
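The comparison metrics named above can be computed roughly as in the sketch below: a Dice-style percent volume overlap on binary masks plus mean and Hausdorff surface distances on surface point sets. The exact definitions used by the Valmet toolset may differ; this is only an illustrative approximation on synthetic data.

```python
import numpy as np
from scipy.spatial.distance import cdist, directed_hausdorff

def volume_overlap(a, b):
    """Percent volume overlap (Dice-style: intersection over mean volume)
    between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 100.0 * 2.0 * inter / (a.sum() + b.sum())

def surface_metrics(pts_a, pts_b):
    """Mean surface distance and symmetric Hausdorff distance between two
    surfaces given as N x 3 point sets (e.g., mesh vertices)."""
    d = cdist(pts_a, pts_b)
    mean_dist = 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
    hausdorff = max(directed_hausdorff(pts_a, pts_b)[0],
                    directed_hausdorff(pts_b, pts_a)[0])
    return mean_dist, hausdorff

# toy examples: two shifted boxes, and two slightly offset unit spheres
a = np.zeros((20, 20, 20), bool); a[5:15, 5:15, 5:15] = True
b = np.roll(a, 2, axis=0)
print(volume_overlap(a, b))

rng = np.random.default_rng(1)
s = rng.normal(size=(500, 3)); s /= np.linalg.norm(s, axis=1, keepdims=True)
print(surface_metrics(s, s + 0.05))
```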
- Published
- 2005
12. Measuring tortuosity of the intracerebral vasculature from MRA images
- Author
-
Weili Lin, Stephen M. Pizer, Elizabeth Bullitt, Stephen R. Aylward, and Guido Gerig
- Subjects
Curvature, Severity of Illness Index, Tortuosity, Magnetic resonance angiography, Three-dimensional imaging, Predictive Value of Tests, Region of interest, Computer-assisted image interpretation, Humans, Segmentation, Electrical and Electronic Engineering, Pathologic neovascularization, Radiological and Ultrasound Technology, Brain Neoplasms, Brain, Pattern recognition, Image segmentation, Cerebral Angiography, Computer Science Applications, Cerebrovascular Disorders, Cerebrovascular Circulation, Metric, Radiology, Artificial intelligence, Abnormality, Software
- Abstract
The clinical recognition of abnormal vascular tortuosity, or excessive bending, twisting, and winding, is important to the diagnosis of many diseases. Automated detection and quantitation of abnormal vascular tortuosity from three-dimensional (3-D) medical image data would, therefore, be of value. However, previous research has centered primarily upon two-dimensional (2-D) analysis of the special subset of vessels whose paths are normally close to straight. This report provides the first 3-D tortuosity analysis of clusters of vessels within the normally tortuous intracerebral circulation. We define three different clinical patterns of abnormal tortuosity. We extend into 3-D two tortuosity metrics previously reported as useful in analyzing 2-D images and describe a new metric that incorporates counts of minima of total curvature. We extract vessels from MRA data, map corresponding anatomical regions between sets of normal patients and patients with known pathology, and evaluate the three tortuosity metrics for ability to detect each type of abnormality within the region of interest. We conclude that the new tortuosity metric appears to be the most effective in detecting several types of abnormalities. However, one of the other metrics, based on a sum of curvature magnitudes, may be more effective in recognizing tightly coiled, "corkscrew" vessels associated with malignant tumors.
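Two of the simpler tortuosity measures mentioned above can be sketched for a vessel centerline given as a 3D polyline: the classic distance metric (path length over chord length) and a discrete sum of turning-angle magnitudes standing in for the sum-of-curvature-magnitudes metric. The paper's actual metrics, including the count of minima of total curvature, are more elaborate than this sketch.

```python
import numpy as np

def distance_metric(path):
    """Classic tortuosity measure: path length divided by the straight-line
    (chord) distance between the endpoints of an N x 3 centerline."""
    seg = np.diff(path, axis=0)
    return np.linalg.norm(seg, axis=1).sum() / np.linalg.norm(path[-1] - path[0])

def total_curvature(path):
    """Sum of turning-angle magnitudes along the centerline -- a discrete
    stand-in for the 'sum of curvature magnitudes' class of metric."""
    seg = np.diff(path, axis=0)
    seg = seg / np.linalg.norm(seg, axis=1, keepdims=True)
    cosang = np.clip(np.einsum('ij,ij->i', seg[:-1], seg[1:]), -1.0, 1.0)
    return np.arccos(cosang).sum()

# toy "corkscrew" vessel: a helix is longer and more curved than a straight tube
t = np.linspace(0, 4 * np.pi, 200)
helix = np.c_[np.cos(t), np.sin(t), 0.3 * t]
print(distance_metric(helix), total_curvature(helix))
```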
- Published
- 2003
13. Shape-correlated Deformation Statistics for Respiratory Motion Prediction in 4D Lung
- Author
-
Xiaoxiao Liu, Stephen M. Pizer, Gig S. Mageras, and Ipek Oguz
- Subjects
Lung, Computer science, Dynamics, Motion, Deformation, Radiation therapy, Motion estimation, Computer vision, Artificial intelligence, Lung cancer, Image-guided radiation therapy
- Abstract
4D image-guided radiation therapy (IGRT) for free-breathing lungs is challenging due to the complicated respiratory dynamics. Effective modeling of respiratory motion is crucial to account for the motion effects on the dose to tumors. We propose a shape-correlated statistical model on dense image deformations for patient-specific respiratory motion estimation in 4D lung IGRT. Using the shape deformations of the high-contrast lungs as the surrogate, the statistical model trained from the planning CTs can be used to predict the image deformation at delivery verification time, under the assumption that the respiratory motion at both times is similar for the same patient. Dense image deformation fields obtained by diffeomorphic image registrations characterize the respiratory motion within one breathing cycle. A point-based particle optimization algorithm is used to obtain the shape models of lungs with group-wise surface correspondences. Canonical correlation analysis (CCA) is adopted in training to maximize the linear correlation between the shape variations of the lungs and the corresponding dense image deformations. Both intra- and inter-session CT studies are carried out on a small group of lung cancer patients and evaluated in terms of the tumor location accuracies. The results suggest potential applications of the proposed method.
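The canonical correlation step can be illustrated with synthetic data: rows are breathing phases, one block holds lung-shape coefficients and the other holds flattened deformation fields, and the fitted CCA model predicts a deformation from a newly observed shape. The sizes and data here are made up for illustration and are far smaller than a real 4D study.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Rows: breathing phases from planning RCCT. Columns: flattened lung-shape
# coordinates (X) and flattened dense deformation fields (Y). Synthetic sizes.
rng = np.random.default_rng(0)
n_phases, shape_dim, deform_dim = 10, 60, 300
X = rng.normal(size=(n_phases, shape_dim))
Y = X @ rng.normal(size=(shape_dim, deform_dim)) + 0.1 * rng.normal(size=(n_phases, deform_dim))

cca = CCA(n_components=2).fit(X, Y)

# At treatment time: observe a new lung shape, predict the dense deformation.
new_shape = rng.normal(size=(1, shape_dim))
predicted_deformation = cca.predict(new_shape)
print(predicted_deformation.shape)   # (1, deform_dim)
```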
- Published
- 2013
14. A New Technique for CT/MR Fusion For Skull Base Imaging
- Author
-
Mauricio Castillo, Vincent N. Carrasco, Suresh K. Mukherji, Stephen M. Pizer, Aziz A. Boxwala, Mitchell Soltys, and Julian G. Rosenman
- Subjects
medicine.medical_specialty ,Image fusion ,Modality (human–computer interaction) ,business.industry ,Soft tissue ,Articles ,computer.file_format ,Neurovascular bundle ,Surgical planning ,Surgery ,Skull ,medicine.anatomical_structure ,Medicine ,Neurology (clinical) ,Radiology ,Image file formats ,business ,Radiation treatment planning ,computer - Abstract
This paper presents our initial experience utilizing a new technique which allows CT and MR image fusion in patients with skull base lesions. Eleven patients with a variety of skull base lesions underwent CT and MR imaging prior to surgery. Both sets of images were coregistered using customized software. The CT and MR data sets were then combined and viewed in a single interactive image format using a high-speed graphic computing system. Image fusion allowed simultaneous visualization of the bony skull base anatomy (CT) and detailed soft tissue anatomy (MR) using a single image format. Combining both modalities was felt to provide a better assessment of the extent of lesions and improve understanding of their relationship to adjacent bony and neurovascular anatomy. Specifically, image fusion enhanced awareness of the location of skull base lesions with respect to the cavernous sinuses, Gasserian ganglia, carotid arteries, and jugular foramina. For tumors arising within the internal auditory canal (IAC), fused images allowed better delineation of the lateral aspect of the lesion with respect to the fundus of the IAC. Thus, fusion of CT and MR studies provides a unique image format which has advantages over single modality display. We believe image fusion is beneficial for surgical planning and for treatment planning of complex skull base malignancies treated with radiotherapy.
- Published
- 1996
15. A method for determination of optimal image enhancement for the detection of mammographic abnormalities
- Author
-
R. Eugene Johnston, Robert McLelland, Keith E. Muller, Stephen M. Pizer, Bradley M. Hemminger, Christina A. Burbeck, Etta D. Pisano, and Derek T. Puff
- Subjects
Contrast enhancement, Workstation, Computer science, Breast Neoplasms, Composite image filter, Digital image, Task Performance and Analysis, Psychophysics, Humans, Mammography, Radiology, Nuclear Medicine and imaging, Computer vision, Observer Variation, Radiological and Ultrasound Technology, Image enhancement, Computer Science Applications, Radiographic Image Enhancement, Evaluation Studies as Topic, Female, Artificial intelligence, Normal breast
- Abstract
We present a paradigm for empirical evaluation of digital image enhancement algorithms for mammography that uses psychophysical methods for implementation and analysis of a clinically relevant detection task. In the experiment, the observer is asked to detect and assign to a quadrant, or indicate the absence of, a simulated mammographic structure characteristic of cancer embedded in a background image of normal breast tissue. Responses are indicated interactively on a computer workstation. The parameter values for the enhancement applied to the composite image may be varied on each trial, and structure detection performance is estimated for each enhancement condition. Preliminary investigations have provided insight into an appropriate viewing duration, and furthermore, suggest that nonradiologists may be used under this methodology for the tasks investigated thus far, for predicting parameter values for clinical investigation. We are presently using this method in evaluating several contrast enhancement algorithms of possible benefit in mammography. These methods enable an objective, clinically relevant evaluation, for the purpose of optimal parameter determination or performance assessment, of digital image-processing methods potentially used in mammography.
- Published
- 1994
16. Portal film enhancement: Technique and clinical utility
- Author
-
Stephen M. Pizer, Cheryl A. Roe, Keith E. Muller, Robert Cromartie, and Julian G. Rosenman
- Subjects
Observer Variation, Cancer Research, Radiation, Radiotherapy, Radiographic Image Enhancement, Low contrast, Oncology, Humans, Contrast (vision), Radiology, Nuclear Medicine and imaging, Computer vision, Adaptive histogram equalization, Artificial intelligence, Radiologic Technology, Unsharp masking
- Abstract
We report the results of a 3-year project whose goal was the development of methods to enhance radiation portal films to improve their readability. We had previously reported on a portal film enhancement technique, contrast-limited adaptive histogram equalization, which could enhance low-contrast detail but degraded sharply contrasted edges. A new method, unsharp masking followed by contrast-limited adaptive histogram equalization, now appears to overcome this problem. A clinical trial to test whether enhanced portal films could be read more accurately than standard ones was undertaken. The trial involved 12 readers from two institutions doing 276 readings. In this trial the enhanced films were judged to be of higher quality than the non-enhanced films (p < .001) and were read more accurately (p = .026). The usefulness and difficulties of routinely performing portal film enhancement in a busy radiation therapy department are discussed.
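A rough sketch of the enhancement sequence (unsharp masking followed by CLAHE) using scikit-image is shown below; the radius, amount, and clip-limit values are placeholders, not the clinically evaluated settings.

```python
import numpy as np
from skimage import exposure, filters

def enhance_portal_film(img, amount=1.0, radius=5, clip_limit=0.01):
    """Unsharp masking followed by contrast-limited adaptive histogram
    equalization (CLAHE). Parameter values here are placeholders, not the
    settings evaluated in the clinical trial."""
    img = img.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)   # normalize to [0, 1]
    sharpened = filters.unsharp_mask(img, radius=radius, amount=amount)
    sharpened = np.clip(sharpened, 0.0, 1.0)                    # keep CLAHE input in range
    return exposure.equalize_adapthist(sharpened, clip_limit=clip_limit)

# toy usage on a synthetic low-contrast image
rng = np.random.default_rng(0)
film = 0.4 + 0.05 * rng.random((256, 256))
enhanced = enhance_portal_film(film)
print(enhanced.min(), enhanced.max())
```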
- Published
- 1993
17. Defining anatomical structures from medical images
- Author
-
Stephen M. Pizer and Edward L. Chaney
- Subjects
Cancer Research, Pathology, Reproducibility, Anatomical structures, Machine learning, Edge detection, Oncology, Margin, Histogram, Dose escalation, Radiology, Nuclear Medicine and imaging, Artificial intelligence, User interface, Radiation treatment planning
- Abstract
Defining anatomical objects in medical images is a critical step in 3D treatment planning. The accuracy and reproducibility of this step affect targeting, optimization based on dose-volume histograms or other volume-based measures, and the development of biological models for tumor control and complication probabilities. Efficiency is another important consideration. The current standard of practice based on edge detection is inadequate in the modern era of conformal therapy, tight margins, and dose escalation. More sophisticated approaches based on computer vision techniques, such as those discussed here, need to be further studied in the laboratory and tested in the clinical setting to verify accuracy and reproducibility, and to develop clinically reliable performance and efficient user interfaces.
- Published
- 1992
18. A shape-navigated image deformation model for 4D lung respiratory motion estimation
- Author
-
Rohit Ramesh Saboo, Xiaoxiao Liu, Stephen M. Pizer, and Gig S. Mageras
- Subjects
Lung, Statistical shape analysis, Dynamics, Feature (computer vision), Motion estimation, Medical imaging, Breathing, Computer vision, Artificial intelligence, Intensity modulation
- Abstract
Intensity modulated radiation therapy (IMRT) for cancers in the lung remains challenging due to the complicated respiratory dynamics. We propose a shape-navigated dense image deformation model to estimate patient-specific breathing motion using 4D respiratory-correlated CT (RCCT) images. The idea is to use the shape change of the lungs, the major motion feature in the thorax image, as a surrogate to predict the corresponding dense image deformation from training. To build the statistical model, dense diffeomorphic deformations from the images at all other time points to the image at end expiration are calculated, and the shapes of the lungs are automatically extracted. By correlating the shape variation with the temporally corresponding image deformation variation, a linear mapping function that maps a shape change to its corresponding image deformation is calculated from the training sample. Finally, given a shape extracted from the image at an arbitrary time point, its dense image deformation can be predicted from the pre-computed statistics. The method is carried out on two patients and evaluated in terms of the tumor and lung estimation accuracies. The results show the robustness of the model and suggest its potential for 4D lung radiation treatment planning.
- Published
- 2009
19. Autosegmentation of images in radiation oncology
- Author
-
Stephen M. Pizer and Edward L. Chaney
- Subjects
Computer science, Artificial Intelligence, Neoplasms, Radiation oncology, Computer-assisted image interpretation, Humans, Radiology, Nuclear Medicine and imaging, Medical physics, Algorithms, Automated pattern recognition
- Published
- 2009
20. The medical image display and analysis group at the University of North Carolina: reminiscences and philosophy
- Author
-
Stephen M. Pizer
- Subjects
Diagnostic Imaging, Standardization, Universities, Organizational efficiency, Display device, Scale space, User-Computer Interface, Three-dimensional imaging, Intraoperative monitoring, North Carolina, Organizational Objectives, Segmentation, Electrical and Electronic Engineering, Medical schools, Radiological and Ultrasound Technology, Multimedia, Image segmentation, Image Enhancement, Computer Science Applications, Adaptive histogram equalization, Interdisciplinary Communication, Software, Algorithms
- Abstract
The period of the Medical Image Display and Analysis Group (MIDAG) so far is 1974-2002: more than 27 years. We began with a focus on two-dimensional (2-D) display: contrast enhancement, display scale choice, and display device standardization. We co-invented adaptive histogram equalization and later improved it to contrast-limited AHE, and we were perhaps the first to show that adaptive contrast enhancement, i.e., care in the mapping between recorded and displayed intensity and variation of that mapping with the local properties of the image, could significantly affect diagnostic or therapeutic decisions. MIDAG prides itself in having affected medical practice and, thus, the lives of patients. Despite the fact that bringing research from conception to actual medical use is a process sometimes taking a decade, the largest fraction, perhaps all, of our graduate students and faculty are attracted to these applications of computers by this altruism. Areas in which MIDAG research has come to this fruition are the uses of color display in nuclear medicine, the standardization of CRT display and the realization of how many bits of intensity are needed, and the use of tested contrast enhancement methods in areas of medical image use where subtle changes must be detected. Medical areas where we have had an effect are mammography, a major target area for both the standardization and contrast enhancement ends, and portal imaging in radiotherapy, a target area for contrast enhancement. In the 1980s, some of MIDAG's attention moved to image analysis. Also beginning in the 1980s we began to make some contributions to the notions of scale space description of images. With emphasis on the development of segmentation by deformable models and our aforementioned principle that validation is a critical part of research developing image analysis and display methods, we have begun to seriously face the issues of how to validate segmentation and how to choose the parameters of a segmentation method. Our experimental design and analysis techniques involve a variety of new methods for repeated variables designs.
- Published
- 2003
21. Caudate Shape Discrimination in Schizophrenia Using Template-Free Non-parametric Tests
- Author
-
Jeffrey A. Lieberman, Y. Sampath K. Vetsa, Martin Styner, Stephen M. Pizer, and Guido Gerig
- Subjects
Template-free, Computer science, Feature vector, Statistical shape analysis, Nonparametric statistics, Hippocampus, Pattern recognition, Schizophrenia, Basal ganglia, Artificial intelligence, Shape analysis
- Abstract
This paper describes shape analysis of the caudate nucleus structure in a large schizophrenia study (30 controls, 60 schizophrenics). Although analysis of the caudate has not drawn the same attention as the hippocampus, it is a key basal ganglia structure shown to present differences in early development (e.g. autism) and also to present changes due to drug treatment. Left and right caudate were segmented from high resolution MRI using a reliable, semi-automated technique. Shapes were parametrized by a surface description, aligned, and finally represented as medial mesh structures (m-reps). Since schizophrenia patients were categorized based on treatment, we could test size and shape differences between normals, atypically and typically treated subjects. Statistical shape analysis used permutation tests on objects represented by medial representations. This allowed us to bypass the common problems of feature reduction inherent to low sample size and high dimensional feature vectors. Moreover, this test is non-parametric and does not require the choice of a shape template. The choice of medial shape representations led to a separate testing of global and local growth versus deformation. Results show significant caudate size and shape differences, not only between treatment groups and controls, but also among the treatment groups. Shape differences were not found when both treatment groups were grouped into one patient group and compared to controls. There was a clear localization of width and deformation change in the caudate head. As with other clinical studies utilizing shape analysis, results need to be confirmed in new, independent studies to get full confidence in the interpretation of these findings.
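A permutation test on a single scalar shape feature can be sketched as below; the study itself tests multivariate medial (m-rep) features with appropriate handling of multiple comparisons, so this is only a simplified illustration of the statistical idea, with synthetic group data.

```python
import numpy as np

def permutation_test(group_a, group_b, n_perm=10000, seed=0):
    """Two-sample permutation test on a scalar shape feature (e.g., local
    width at a medial atom): p-value for the observed difference of means."""
    rng = np.random.default_rng(seed)
    observed = abs(group_a.mean() - group_b.mean())
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # random relabeling of subjects
        if abs(pooled[:n_a].mean() - pooled[n_a:].mean()) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# toy usage: controls vs. one treatment group, single feature
rng = np.random.default_rng(1)
controls = rng.normal(0.0, 1.0, size=30)
patients = rng.normal(0.6, 1.0, size=30)
print(permutation_test(controls, patients))
```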
- Published
- 2003
22. Medially Based Meshing with Finite Element Analysis of Prostate Deformation
- Author
-
Stephen M. Pizer, Jessica R. Crouch, Edward L. Chaney, and M Zaider
- Subjects
Deformation (mechanics), Computer science, Brachytherapy, System of linear equations, Finite element method, Imaging phantom, Surgery, Polygon mesh, Hexahedron, Algorithm
- Abstract
The finite element method (FEM) is well suited for use in the non-rigid registration of magnetic resonance spectroscopy images (MRSI) with intraoperative ultrasound images of the prostate because FEM provides a principled method for modeling the physical deformation caused when the MRSI intra-rectal imaging probe compresses the prostate. However, FEM requires significant labor and computational time to construct a finite element model and solve the resulting large system of equations. In particular, any finite element based registration method must address the questions of how to generate a mesh from an image and how to solve the system of finite element equations efficiently. This paper focuses on how m-rep image segmentations can be used to generate high quality multi-scale hexahedral meshes for use with FEM. Results from the application of this method to the registration of CT images of a prostate phantom with implanted brachytherapy seeds are presented.
- Published
- 2003
23. Contrast-limited adaptive histogram equalization: speed and effectiveness
- Author
-
Keith E. Muller, R. E. Johnston, Stephen M. Pizer, Bonnie C. Yankaskas, and J.P. Ericksen
- Subjects
Computer science, Computed tomography, Histogram, Clinical diagnosis, Medical imaging, Computer vision, Adaptive histogram equalization, Artificial intelligence, Tomography
- Abstract
An experiment intended to evaluate the clinical application of contrast-limited adaptive histogram equalization (CLAHE) to chest computed tomography (CT) images is reported. A machine especially designed to compute CLAHE in a few seconds is discussed. It is shown that CLAHE can be computed in 4 s, after a 5-s loading time, using a specially designed parallel engine made from a few thousand dollars' worth of off-the-shelf components. The processing appears to be useful for a wide range of medical images, but the limitations of observer calibration make it impossible to demonstrate such usefulness by agreement experiments.
- Published
- 2002
24. Image processing algorithms for digital mammography: a pictorial essay
- Author
-
Etta D. Pisano, Mark B. Williams, Laurie L. Fajardo, Stephen M. Pizer, Emily F. Conant, Andrew D. A. Maidment, Loren T. Niklason, Martin J. Yaffe, Marylee E. Brown, Elodia B. Cole, Stephen R. Aylward, Bradley M. Hemminger, R. E. Johnston, and Daniel B. Kopans
- Subjects
Digital mammography, Visibility, Image processing, Breast Diseases, Histogram, Digital image processing, Computer-assisted image processing, Mammography, Humans, Radiology, Nuclear Medicine and imaging, Adaptive histogram equalization, Computer vision, Female, Artificial intelligence, Algorithms, Unsharp masking
- Abstract
Digital mammography systems allow manipulation of fine differences in image contrast by means of image processing algorithms. Different display algorithms have advantages and disadvantages for the specific tasks required in breast imaging-diagnosis and screening. Manual intensity windowing can produce digital mammograms very similar to standard screen-film mammograms but is limited by its operator dependence. Histogram-based intensity windowing improves the conspicuity of the lesion edge, but there is loss of detail outside the dense parts of the image. Mixture-model intensity windowing enhances the visibility of lesion borders against the fatty background, but the mixed parenchymal densities abutting the lesion may be lost. Contrast-limited adaptive histogram equalization can also provide subtle edge information but might degrade performance in the screening setting by enhancing the visibility of nuisance information. Unsharp masking enhances the sharpness of the borders of mass lesions, but this algorithm may make even an indistinct mass appear more circumscribed. Peripheral equalization displays lesion details well and preserves the peripheral information in the surrounding breast, but there may be flattening of image contrast in the nonperipheral portions of the image. Trex processing allows visualization of both lesion detail and breast edge information but reduces image contrast.
- Published
- 2000
25. Registration of 3D cerebral vessels with 2D digital angiograms: clinical evaluation
- Author
-
Suresh K. Mukherji, Christopher S. Coffey, Alan Liu, Jeffrey A. Stone, Stephen M. Pizer, Stephen R. Aylward, Keith E. Muller, and Elizabeth Bullitt
- Subjects
MR angiography, Subtraction, Digital subtraction angiography, Tracking system, Cerebral Angiography, Carotid Arteries, Angiography, View plane, Computer-assisted image processing, Sign test, Humans, Radiology, Nuclear Medicine and imaging, Computer vision, Artificial intelligence, Clinical evaluation, Magnetic Resonance Angiography
- Abstract
The purpose of this study was to evaluate the accuracy and speed of a new, semiautomatic method of three-dimensional (3D)-two-dimensional (2D) vascular registration. This method should help guide endovascular procedures by allowing interpretation of each digital subtraction angiographic (DSA) image in terms of precreated 3D vessel trees that contain "parent-child" connectivity information. Connected 3D vessel trees were created from segmented magnetic resonance (MR) angiograms. Eleven total DSA images were registered with such trees by using both our method and the current standard (manual registration). The accuracy of each method was compared by using repeated-measures analysis of variance with correction for heterogeneity of variance to evaluate separation of curve pairs on the view plane. Subjective clinical comparisons of the two registration methods were evaluated with the sign test. Registration times were evaluated for both methods and also as a function of the error in the initial estimate of MR angiographic position. The new registration method produced results that were numerically superior to those of manual registration (P < .001) and was subjectively judged to be as good as or better by clinical reviewers. Registration time with the new method was faster (P < .001). If the rotational error in the initial estimate of MR angiographic position was less than 10 degrees around each axis, the registration itself took only 1-2 minutes. This method is quicker than and produces results as good as or better than those of manual registration. This method should be able to calculate an initial registration matrix during endovascular embolization and adjust that matrix intermittently with registration updates provided by automatic tracking systems.
- Published
- 2000
26. SU-FF-J-17: Image-Guided Radiotherapy Using Nanotube Stationary Tomosynthesis Technology
- Author
-
Gregg Tracton, Stephen M. Pizer, B Frederick, Sha Chang, David S. Lalush, Michael S. Lawrence, and Xiaoxiao Liu
- Subjects
Computer science, Image registration, Soft tissue, General Medicine, Tomosynthesis, Radiation therapy, Prostate, Temporal resolution, Medical imaging, Dosimetry, Medical physics, Computer vision, Artificial intelligence, Image-guided radiation therapy
- Abstract
Purpose: To develop image guidance approaches for clinical application of Nanotube Stationary Tomosynthesis (NST), a nanotechnology-based online image-guided radiotherapy (IGRT) technology that is capable of unprecedented temporal resolution and imaging speed. Method and Materials: NST is an accelerator gantry-mounted multi-source-array kV imaging technology under development by Siemens. Using a single image panel, the patient can be imaged in the treatment volume by 50 x-ray sources on the array within ~2 sec before and even during treatment. However, translating NST's image acquisition strengths into real clinical benefits requires significant software development, including new image registration tools that take full advantage of the real-time aspects of tomosynthesis imaging while minimizing its intrinsic resolution limitation. We propose a fast, versatile image registration approach that integrates information from the large field-of-view projection images, the high-temporal-resolution tomosynthesis images, and the high-spatial-resolution planning 3D CT image; offers tradeoffs to balance the user's needs for quality vs. speed; and potentially extracts a daily image deformation from the NST tomosynthesis image for soft-tissue-based IGRT and treatment-course dose accumulation. Results: We have developed the initial image registration approaches for clinical testing of the first protocol-type NST IGRT system. In this presentation we describe the proposed NST IGRT image registration approaches in lung and prostate treatments and discuss imaging dose and imaging dose reduction approaches. Other aspects of the NST technology, including system design, NST reconstruction, image registration, and imaging geometry, may be presented separately at the meeting. Conclusion: We have demonstrated that Nanotube Stationary Tomosynthesis technology has the potential to offer temporal resolution and imaging speed, important features that have not been adequately addressed by existing IGRT technology today. Conflict of Interest: This work is partially funded by a grant from Siemens Oncology Care System.
- Published
- 2009
27. 3D Graph Description of the Intracerebral Vasculature from Segmented MRA and Tests of Accuracy by Comparison with X-ray Angiograms
- Author
-
Suresh K. Mukherji, Guido Gerig, Elizabeth Bullitt, Jeffrey A. Stone, Stephen R. Aylward, Alan Liu, Stephen M. Pizer, and Christopher S. Coffey
- Subjects
Subtraction, Vessel segmentation, Digital subtraction angiography, Minimum spanning tree, Cardiovascular system, Graph (abstract data type), Computer vision, Artificial intelligence, Intracerebral vasculature, Parent vessel, Mathematics
- Abstract
This paper describes largely automated methods of creating connected, 3D vascular trees from individual vessels segmented from magnetic resonance angiograms. Vessel segmentation is initiated by user-supplied seed points, with automatic calculation of vessel skeletons as image intensity ridges and automatic estimation of vessel widths via medialness calculations. The tree-creation process employs a variant of the minimum spanning tree algorithm and evaluates image intensities at each proposed connection point. We evaluate the accuracy of nodal connections by registering a 3D vascular tree with 4 digital subtraction angiograms (DSAs) obtained from the same patient, and by asking two neuroradiologists to evaluate each nodal connection on each DSA view. No connection was judged incorrect. The approach permits new, clinically useful visualizations of the intracerebral vasculature.
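The tree-creation idea, a variant of the minimum spanning tree over candidate connections, can be sketched as below using only geometric distances between representative vessel points; the paper's method additionally evaluates image intensities at each proposed connection point, which this toy version omits.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def connect_vessels(points):
    """Connect individually segmented vessels into a tree by computing a
    minimum spanning tree over pairwise distances between representative
    points; each MST edge is a candidate parent-child connection."""
    w = cdist(points, points)                        # symmetric distance (cost) matrix
    mst = minimum_spanning_tree(w)                   # sparse matrix of selected edges
    rows, cols = mst.nonzero()
    return list(zip(rows.tolist(), cols.tolist()))

# toy usage: five vessel seed points in 3D
pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [3, 1, 0], [3, 1, 2]], float)
print(connect_vessels(pts))
```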
- Published
- 1999
28. Automated Identification and Measurement of Objects via Populations of Medial Primitives, with Application to Real Time 3D Echocardiography
- Author
-
Stephen M. Pizer and George D. Stetten
- Subjects
Scale, Orientation (computer vision), Image, Medial axis, Mitral valve, Cylinder, Computer vision, Artificial intelligence, Mathematics, Dimensionality
- Abstract
We suggest that identification and measurement of objects in 3D images can be automatic, rapid, and stable, based on the statistical properties of populations of medial primitives sought throughout the image space. These properties include scale, orientation, endness, and medial dimensionality. The property of medial dimensionality differentiates the sphere, the cylinder, and the slab, with intermediate dimensionality also possible. Endness results at the cap of a cylinder or the edge of a slab. The values of these medial properties at just a few locations provide an intuitive and robust model for complex shape. For example, the left ventricle during systole can be described as a large cylinder with an apical cap at one end, a slab-like mitral valve at the other (closed during systole), and appropriate interrelations among components in terms of their scale, orientation, and location. We demonstrate our method on simple geometric test objects, and show it capable of automatically identifying the left ventricle and measuring its volume in vivo using real-time 3D echocardiography.
- Published
- 1999
29. Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms
- Author
-
M. Patricia Braeuning, Shuquan Zong, Keith E. Muller, Stephen M. Pizer, Etta D. Pisano, Marla C. DeLuca, R. Eugene Johnston, and Bradley M. Hemminger
- Subjects
Computer science, Image processing, Computer-assisted image processing, Mammography, Contrast (vision), Humans, Radiology, Nuclear Medicine and imaging, Computer vision, Contrast level, Radiological and Ultrasound Technology, Pixel, Orientation (computer vision), Computer Science Applications, Radiographic Image Enhancement, Adaptive histogram equalization, Female, Artificial intelligence
- Abstract
The purpose of this project was to determine whether Contrast Limited Adaptive Histogram Equalization (CLAHE) improves detection of simulated spiculations in dense mammograms. Lines simulating the appearance of spiculations, a common marker of malignancy when visualized with masses, were embedded in dense mammograms digitized at 50 micron pixels, 12 bits deep. Film images with no CLAHE applied were compared to film images with nine different combinations of clip levels and region sizes applied. A simulated spiculation was embedded in a background of dense breast tissue, with the orientation of the spiculation varied. The key variables involved in each trial included the orientation of the spiculation, contrast level of the spiculation and the CLAHE settings applied to the image. Combining the 10 CLAHE conditions, 4 contrast levels and 4 orientations gave 160 combinations. The trials were constructed by pairing 160 combinations of key variables with 40 backgrounds. Twenty student observers were asked to detect the orientation of the spiculation in the image. There was a statistically significant improvement in detection performance for spiculations with CLAHE over unenhanced images when the region size was set at 32 with a clip level of 2, and when the region size was set at 32 with a clip level of 4. The selected CLAHE settings should be tested in the clinic with digital mammograms to determine whether detection of spiculations associated with masses detected at mammography can be improved.
- Published
- 1998
30. [Untitled]
- Author
-
E.L. Chaney, X. Fang, Robert E. Broadhurst, Joshua V. Stough, Stephen M. Pizer, Gregg Tracton, and Ja-Yeon Jeong
- Subjects
Cancer Research, Radiation, Oncology, Radiology, Nuclear Medicine and imaging, Computer vision, Image segmentation, Artificial intelligence, Likelihood function, Image
- Published
- 2006
31. Surgical instrument guidance using synthesized anatomical structures
- Author
-
Stephen M. Pizer, Alan Liu, and Elizabeth Bullitt
- Subjects
Anatomical structures, Surgical instrument, Intraoperative procedures, Parallax, Phantom studies, Projection, Visualization, Biomedical engineering
- Abstract
We present a new method of intraoperative, image-based surgical instrument guidance. The method has the potential to significantly improve visualization of intraoperative procedures that currently depend on fluoroscopic images for navigation. 3D information from preoperative computed tomography (CT) is fused with images of the instrument taken intraoperatively. Our approach employs synthesized anatomical structures, a novel approach using the effect of parallax to synthesize the apparent position of anatomical structures viewed under projection. The result is a reconstruction of the instrument in the context of the CT image which greatly facilitates the understanding of instrument position and trajectory within the patient. In this paper, we show how the method can potentially be applied to surgical procedures such as percutaneous rhizotomy. Phantom studies showing our preliminary results are included.
- Published
- 1997
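The guidance method above relies on the geometry of projection imaging. The sketch below is a generic two-view triangulation under an idealized pinhole model, included only to illustrate how parallax between projection views constrains a 3D position; it is not the paper's method of synthesizing anatomical structures, and the projection matrices and point are made up for the example.

```python
# Generic illustration: recover a 3D point (e.g., an instrument tip) from its
# projections in two views with known 3x4 projection matrices (DLT method).
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null-space direction = homogeneous point
    return X[:3] / X[3]             # back to inhomogeneous coordinates

# Two views of the same point, separated laterally to create parallax.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                 # reference view
P2 = np.hstack([np.eye(3), np.array([[-50.0], [0.0], [0.0]])])  # shifted view
point = np.array([10.0, 20.0, 1000.0, 1.0])
x1 = (P1 @ point)[:2] / (P1 @ point)[2]
x2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate(P1, P2, x1, x2))    # approximately [10, 20, 1000]
```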
32. The effect of intensity windowing on the detection of simulated masses embedded in dense portions of digitized mammograms in a laboratory setting
- Author
- R. Eugene Johnston, Deborah H. Glueck, Jayanthi Chandramouli, William F. Garrett, Stephen M. Pizer, Bradley M. Hemminger, Derek T. Puff, M. Patricia Braeuning, Keith E. Muller, and Etta D. Pisano
- Subjects
Computer science ,media_common.quotation_subject ,Image processing ,Sensitivity and Specificity ,Article ,Quadrant (plane geometry) ,Position (vector) ,Image Processing, Computer-Assisted ,medicine ,Humans ,Contrast (vision) ,Mammography ,Radiology, Nuclear Medicine and imaging ,Computer vision ,media_common ,Observer Variation ,Analysis of Variance ,Radiological and Ultrasound Technology ,Pixel ,medicine.diagnostic_test ,Phantoms, Imaging ,business.industry ,Computer Science Applications ,Intensity (physics) ,Radiographic Image Enhancement ,Female ,Artificial intelligence ,business - Abstract
The purpose of this study was to determine whether intensity windowing (IW) improves detection of simulated masses in dense mammograms. Simulated masses were embedded in dense mammograms digitized at 50 microns/pixel, 12 bits deep. Images were printed with no windowing applied and with nine window width and level combinations applied. A simulated mass was embedded in a realistic background of dense breast tissue, with the position of the mass against the background varied. The key variables involved in each trial were the position of the mass, the contrast level of the mass, and the IW setting applied to the image. Combining the 10 image processing conditions, 4 contrast levels, and 4 quadrant positions gave 160 combinations. The trials were constructed by pairing the 160 combinations of key variables with 160 backgrounds. The entire experiment consisted of 800 trials. Twenty observers were asked to identify the quadrant of the image in which the mass was located. There was a statistically significant improvement in detection performance for masses when the window width was set at 1024 with a level of 3328. IW should be tested in the clinic to determine whether mass detection performance in real mammograms is improved.
- Published
- 1997
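For concreteness, the sketch below shows one common formulation of intensity windowing: values within a window of the given width centered on the level are mapped linearly to the display range, and values outside saturate. The centering convention and output range are assumptions for illustration, not details taken from the study.

```python
# A minimal sketch of intensity windowing on 12-bit data, assuming the window
# is centered on the level; out-of-window values saturate to black or white.
import numpy as np

def intensity_window(image: np.ndarray, width: float, level: float,
                     out_max: float = 255.0) -> np.ndarray:
    """Map [level - width/2, level + width/2] linearly onto [0, out_max]."""
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(image.astype(np.float64), lo, hi)
    return (clipped - lo) / (hi - lo) * out_max

# The best-performing setting reported above: width 1024, level 3328.
rng = np.random.default_rng(1)
mammogram = rng.integers(0, 4096, size=(128, 128))   # synthetic 12-bit data
windowed = intensity_window(mammogram, width=1024, level=3328)
```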
33. Towards performing ultrasound-guided needle biopsies from within a head-mounted display
- Author
- Andrei State, Gentaro Hirota, William F. Garrett, Stephen M. Pizer, Mary C. Whitton, Henry Fuchs, Mark A. Livingston, and Etta D. Pisano
- Subjects
Breast biopsy ,medicine.medical_specialty ,medicine.diagnostic_test ,business.industry ,Ultrasound ,Cyst aspiration ,Human patient ,Optical head-mounted display ,Ultrasound guided ,Needle biopsy ,Biopsy ,medicine ,Computer vision ,Radiology ,Artificial intelligence ,business - Abstract
Augmented reality is applied to ultrasound-guided needle biopsy of the human breast. In a tracked stereoscopic head-mounted display, a physician sees the ultrasound imagery “emanating” from the transducer, properly registered with the patient and the biopsy needle. A physician has successfully used the system to guide a needle into a synthetic tumor within a breast phantom and examine a human patient in preparation for a cyst aspiration.
- Published
- 1996
34. Automatic male pelvis segmentation from CT images via statistically trained multi-object deformable m-rep models
- Author
- G. Gash, E.L. Chaney, Gregg Tracton, Derek Merck, Sarang Joshi, Robert E. Broadhurst, Joshua V. Stough, Ja-Yeon Jeong, Qiong Han, Conglin Lu, Stephen M. Pizer, and Tom Fletcher
- Subjects
Cancer Research ,Radiation ,Oncology ,Male pelvis ,business.industry ,Medicine ,Radiology, Nuclear Medicine and imaging ,Segmentation ,Computer vision ,Artificial intelligence ,Object (computer science) ,business - Published
- 2004
35. Cores for image registration
- Author
- Alan Liu, Stephen M. Pizer, Paren I. Shah, Daniel S. Fritsch, Edward L. Chaney, and Suraj Raghavan
- Subjects
medicine.diagnostic_test ,Chapel ,Radiation oncology ,3d image processing ,medicine ,Art history ,Computed tomography ,computer ,Cartography ,computer.programming_language - Abstract
Daniel S. Fritsch, Stephen M. Pizer, Edward Chaney, Alan Liu, Suraj Raghavan, and Tushar Shah; Departments of Biomedical Engineering, Computer Science, Radiology, and Radiation Oncology, and the School of Medicine, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27516.
- Published
- 1994
36. SU-HH-BRB-12: IGRT Via Machine Learning from Limited Angle Projection Images
- Author
- Stephen M. Pizer, Sha Chang, B Frederick, and Chen-Rui Chou
- Subjects
Cone beam computed tomography ,medicine.diagnostic_test ,Orientation (computer vision) ,Computer science ,business.industry ,Computed tomography ,General Medicine ,Machine learning ,computer.software_genre ,Tomosynthesis ,Digital Tomosynthesis Mammography ,Transformation (function) ,Medical imaging ,medicine ,Dosimetry ,Artificial intelligence ,Projection (set theory) ,business ,computer ,Image-guided radiation therapy - Abstract
Purpose: To yield accurate patient treatment setup correction from a small number of projection images in tomosynthesis-based IGRT approaches using cone-beam CT (CBCT) and the novel carbon nanotube stationary tomosynthesis (NST) devices. Method and Materials: Our method uses a machine learning strategy with a two-stage process: training and IGRT. In the training stage we perform patient-specific training that samples from a range of potential patient movements, and for each such movement we generate 2D projections by transforming and re-projecting the patient's planning CT. We compute a linear regression between the patient movements and the differences between the projections of the moved CT and those of the CT in the planning position. In the IGRT stage, the learned regression model is applied iteratively to the successive residues between the real-time projections and those of the moving CT transformed by the previously predicted parameters. This iteration yields the predicted transformation with sub-voxel accuracy. The re-projection process is implemented on GPUs to speed up the real-time image guidance that will be enabled by the carbon nanotube based stationary tomosynthesis IGRT system, NST. Results: We tested our method using three patients' head-and-neck CTs. A total of 180 test movements were simulated with combinations of translations and rotations randomly picked within -2 to 2 cm and -5 to 5 degrees, respectively. The NST and the 44-degree CBCT IGRT devices were simulated in our tests. The mean vector error distance was 0.1270 cm with 12 NST image projections and 0.1116 cm with 5 image projections from the 44-degree CBCT. Conclusion: Our method can yield accurate patient position and orientation correction with limited projection images, which is important for image guidance during treatment delivery and for the reduction of imaging radiation dose. Conflict of Interest: Research sponsored by Siemens Medical Solutions.
- Published
- 2010
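The two-stage strategy described above (train a regression from simulated movements, then apply it iteratively to projection residues) can be sketched as follows. This is only an illustrative outline under stated assumptions: project(ct, params) is a hypothetical routine, not provided here, that rigidly transforms the planning CT by 6 setup parameters (3 translations in cm, 3 rotations in degrees) and returns the limited-angle projections flattened into one vector; the minimum-norm least-squares fit stands in for whatever regression the authors actually used.

```python
# A hedged sketch of patient-specific training plus iterative IGRT estimation.
import numpy as np

def train_regression(ct, project, n_samples=200, rng=None):
    """Fit W so that setup parameters ~= (projection difference) @ W."""
    rng = rng if rng is not None else np.random.default_rng(0)
    p0 = project(ct, np.zeros(6))                          # projections at planning position
    trans = rng.uniform(-2.0, 2.0, size=(n_samples, 3))    # translations, cm
    rots = rng.uniform(-5.0, 5.0, size=(n_samples, 3))     # rotations, degrees
    params = np.hstack([trans, rots])                      # (n_samples, 6)
    diffs = np.stack([project(ct, p) - p0 for p in params])    # (n_samples, n_pixels)
    W, *_ = np.linalg.lstsq(diffs, params, rcond=None)         # (n_pixels, 6)
    return W

def estimate_setup(ct, project, W, observed, n_iters=5):
    """Iteratively apply the regression to the residue between observed and re-projected images."""
    pose = np.zeros(6)
    for _ in range(n_iters):
        residue = observed - project(ct, pose)   # residue w.r.t. current estimate
        pose = pose + residue @ W                # regression predicts the remaining correction
    return pose
```

In the abstract's terms, train_regression corresponds to the patient-specific training stage and estimate_setup to the iterative IGRT stage.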
37. A method for measuring lung shape in electronic portal images during gated treatment
- Author
- L. Levine, Gikas S. Mageras, Stephen M. Pizer, Gregg Tracton, and E.L. Chaney
- Subjects
Cancer Research ,medicine.medical_specialty ,Radiation ,Lung ,medicine.anatomical_structure ,Oncology ,business.industry ,medicine ,Radiology, Nuclear Medicine and imaging ,Radiology ,business - Published
- 2000
38. Clinical Evaluation of M-Rep Automatic Segmentation Tool on Prostate in CT
- Author
- Randall J. Kimple, Sha Chang, Stephen M. Pizer, X Tang, Stephen L. Harris, K. Deschene, Gregg Tracton, E.L. Chaney, and Mark Foskey
- Subjects
Cancer Research ,medicine.medical_specialty ,Radiation ,medicine.anatomical_structure ,Oncology ,Prostate ,business.industry ,medicine ,Automatic segmentation ,Radiology, Nuclear Medicine and imaging ,Radiology ,business ,Clinical evaluation - Published
- 2009
39. Radiology workstation for mammography: preliminary observations, eyetracker studies, and design
- Author
- Etta D. Pisano, Stephen M. Pizer, R. E. Johnston, Bradley M. Hemminger, and David V. Beard
- Subjects
medicine.medical_specialty ,Multimedia ,Workstation ,medicine.diagnostic_test ,business.industry ,High resolution ,computer.software_genre ,law.invention ,Health care delivery ,law ,Medicine ,Ct technique ,Mammography ,Radiology ,business ,computer - Abstract
For the last four years, the UNC FilmPlane project has focused on constructing a radiology workstation facilitating CT interpretations equivalent to those with film and viewbox. Interpretation of multiple CT studies was originally chosen because handling such large numbers of images was considered one of the most difficult tasks that could be performed with a workstation. The authors extend the FilmPlane design to address mammography. The high resolution and contrast demands, coupled with the number of images that are often cross-compared, make mammography a difficult challenge for the workstation designer. This paper presents the results of preliminary work on workstation interpretation of mammography. Background material is presented to justify why the authors believe electronic mammographic workstations could improve health care delivery. The results of several observation sessions and a preliminary eyetracker study of multiple-study mammography interpretations are described. Finally, tentative conclusions are presented about what a mammographic workstation might look like and how it would have to perform to meet clinical demands effectively.
- Published
- 1991
40. Agreement experiments: a method for quantitatively testing new medical image display approaches
- Author
- Stephen M. Pizer, Leonard A. Parker, David J. Delany, Bonnie C. Yankaskas, J. R. Perry, and R. E. Johnston
- Subjects
business.industry ,Observer performance ,Chest ct ,Medicine ,Pattern recognition ,Artificial intelligence ,business ,Standard technique ,Equivalence (measure theory) ,Image display ,Simulation - Abstract
New medical image display devices or processes are commonly evaluated by anecdotal reports or subjective evaluations, which are informative and relatively easy to acquire but do not provide quantitative measures. On the other hand, experiments employing ROC analysis yield quantitative measurements but are very laborious and demand pathological proof of outcome. We have designed and are employing a new approach, which we have termed "agreement experiments," to quantitatively test the equivalence of observer performance on two systems. It was specifically developed to test whether a radiologist using a new display technique, which has some clear advantages over the standard technique, will detect and interpret diagnostic signs as he would with the standard display technique. Agreement experiments use checklists and confidence ratings to measure how well two radiologists agree on the presence of diagnostic signs when both view images on the standard display. This yields a baseline measure of agreement. Agreement measurements are then obtained when one of the two radiologists views cases on the new display while the other uses the standard technique. If the levels of agreement when one reads from the new display and one reads from the standard display are not statistically different from the baseline measures of agreement, we conclude that the two systems are equivalent in conveying diagnostic signs. We report on an experiment using this test, comparing agreement on radiological findings for chest CT studies viewed on the conventional multiformat film/lightbox with agreement on findings from chest CT images presented on a multiple-screen video system. The study consists of 80 chest CT studies. The results showed 81% to 86% agreement between the two viewing modalities, which fell within our criteria for showing agreement.
- Published
- 1990
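As a simple illustration of the kind of score such an experiment produces, the sketch below computes percent agreement between two readers' binary checklist entries. The study's actual statistical criteria are not given in the abstract, so this is a generic example only; the checklist size, case count, and disagreement rate are invented for the demonstration.

```python
# A generic sketch of scoring agreement between two readers' checklists of
# diagnostic signs (1 = sign present, 0 = absent).
import numpy as np

def percent_agreement(reader_a: np.ndarray, reader_b: np.ndarray) -> float:
    """Percentage of checklist items on which the two readers agree."""
    return 100.0 * np.mean(reader_a == reader_b)

# Example: two readers scoring the same 10-item checklist over 8 cases.
rng = np.random.default_rng(2)
a = rng.integers(0, 2, size=(8, 10))
b = a.copy()
b[rng.random(b.shape) < 0.15] ^= 1          # flip roughly 15% of reader B's entries
print(f"{percent_agreement(a, b):.1f}% agreement")
```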
41. Prostate and Bladder Segmentation Using a Statistically Trainable Model
- Author
- Joshua V. Stough, Xiaoxiao Liu, Gregg Tracton, Robert E. Broadhurst, Joshua Levy, Ja-Yeon Jeong, E.L. Chaney, and Stephen M. Pizer
- Subjects
Cancer Research ,medicine.medical_specialty ,Radiation ,medicine.anatomical_structure ,Oncology ,business.industry ,Prostate ,Medicine ,Radiology, Nuclear Medicine and imaging ,Segmentation ,Radiology ,business - Published
- 2007
42. Mask: A portable software tool combining multiple methods for efficient definition of anatomy and target volumes from CT scans
- Author
- Stephen M. Pizer, K. Deschesne, E.L. Chaney, Gregg Tracton, and Julian G. Rosenman
- Subjects
Cancer Research ,Radiation ,Oncology ,business.industry ,Software tool ,Planning target volume ,Medicine ,Radiology, Nuclear Medicine and imaging ,Computer vision ,Artificial intelligence ,Multiple methods ,business - Published
- 1994
43. Comparison of automatic and human segmentation of kidneys from CT images
- Author
- Stephen M. Pizer, Sarang Joshi, E.L. Chaney, M Rao, James Z. Chen, and Gregg Tracton
- Subjects
Cancer Research ,Radiation ,Oncology ,business.industry ,Medicine ,Radiology, Nuclear Medicine and imaging ,Computer vision ,Segmentation ,Artificial intelligence ,business - Published
- 2002
44. Pablo: clinical prototype software for automatic image segmentation of normal anatomical structures using medially based deformable models
- Author
- Gregg Tracton, Stephen M. Pizer, P.T. Fletcher, Sarang Joshi, E.L. Chaney, Andrew Thall, and A.G. Gash
- Subjects
Cancer Research ,Radiation ,Prototype software ,Oncology ,business.industry ,Anatomical structures ,Medicine ,Scale-space segmentation ,Radiology, Nuclear Medicine and imaging ,Computer vision ,Image segmentation ,Artificial intelligence ,business - Published
- 2002
45. A novel image analysis approach for verifying MLC leaf positions in electronic portal images
- Author
- Stephen M. Pizer, E.L. Chaney, Gregg Tracton, L.D. Potter, and L. Levine
- Subjects
Cancer Research ,Radiation ,Oncology ,business.industry ,Medicine ,Radiology, Nuclear Medicine and imaging ,Computer vision ,Artificial intelligence ,business ,Image (mathematics) - Published
- 2000
46. 58 A probabilistic approach using deformable organ models for automatic definition of normal anatomical structures for 3D treatment planning
- Author
- Liyun Yu, Matthew J. McAuliffe, Stephen M. Pizer, Edward L. Chaney, Daniel S. Fritsch, and Valen E. Johnson
- Subjects
Cancer Research ,Radiation ,Automatic transmission ,business.industry ,Anatomical structures ,Probabilistic logic ,Machine learning ,computer.software_genre ,law.invention ,Oncology ,law ,Medicine ,Radiology, Nuclear Medicine and imaging ,Artificial intelligence ,Radiation treatment planning ,business ,computer - Published
- 1996
47. Three-Dimensional High-Resolution Volume Rendering (HRVR) of Computed Tomography Data
- Author
- Stephen M. Pizer, Henry Fuchs, Andrew L. Skinner, Harold C. Pillsbury, Julian G. Rosenman, Richard E. Davis, and Marc Levoy
- Subjects
Male ,medicine.medical_specialty ,High resolution ,Pilot Projects ,Computed tomography ,Data loss ,Imaging data ,Image Processing, Computer-Assisted ,medicine ,Humans ,Computer vision ,Image resolution ,Aged ,medicine.diagnostic_test ,business.industry ,Infant, Newborn ,Infant ,Volume rendering ,Middle Aged ,Surgery ,Otorhinolaryngology ,Child, Preschool ,Head and neck surgery ,Female ,Artificial intelligence ,Tomography, X-Ray Computed ,business ,Head ,Neck - Abstract
Conventional computed tomographic display formats are not optimal for demonstrating three-dimensional anatomic relationships. In otolaryngology--head and neck surgery these critical relationships are often highly complex, and their complete understanding is essential to a successful surgical outcome. A new computer-generated image display format, high-resolution volume rendering (HRVR), facilitates the understanding of these critical anatomic relationships by transforming conventional imaging data into clinically relevant 3-D images. Unlike many other 3-D reconstruction algorithms, HRVR suffers minimal data loss in the conversion process, which in turn provides superior image resolution. This allows 3-D technology to be applied more effectively to small or complicated anatomic structures such as those frequently encountered in otolaryngology--head and neck surgery. Advances in computer-controlled manipulations that further enhance the evaluation of desired pathologic features have been achieved. This pilot study contains representative clinical cases chosen to illustrate the potential utility of HRVR in otolaryngology--head and neck surgery. The authors believe HRVR images will enhance the surgeon's understanding of the 3-D anatomic relationships that exist between critical pathologic features and surrounding vital structures.
- Published
- 1991
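The entry above refers to volume rendering, in which every voxel contributes to the image rather than being discarded by a surface-extraction step. The sketch below shows generic front-to-back ray compositing, the family of techniques HRVR belongs to; it is not the HRVR implementation, and the axis-aligned rays and linear opacity transfer function are simplifying assumptions.

```python
# A generic sketch of front-to-back volume-ray compositing along the z axis
# of a CT volume; grayscale voxel intensity doubles as color, and opacity
# comes from a simple linear transfer function (an assumption here).
import numpy as np

def composite_along_z(volume: np.ndarray, opacity_scale: float = 0.02) -> np.ndarray:
    """Render a 2-D image by compositing each z column front to back."""
    vol = volume.astype(np.float64)
    vol = (vol - vol.min()) / (vol.max() - vol.min() + 1e-9)   # normalize to [0, 1]
    image = np.zeros(vol.shape[1:])
    transmittance = np.ones(vol.shape[1:])
    for z in range(vol.shape[0]):                  # march front to back
        alpha = opacity_scale * vol[z]             # opacity from the transfer function
        image += transmittance * alpha * vol[z]    # accumulate shaded color
        transmittance *= 1.0 - alpha               # attenuate light for deeper voxels
    return image

# Example on a synthetic volume containing a bright spherical structure.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (x**2 + y**2 + z**2 < 20**2).astype(float)
rendering = composite_along_z(volume)
```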
48. Quantitative Digital Fluorography
- Author
- F. J. Kohout, David J. Delany, Lawrence Mark Lifshitz, Stephen M. Pizer, F. A. DiBianca, and Paul F. Jaques
- Subjects
medicine.medical_specialty ,Orientation (computer vision) ,Single factor ,% area reduction ,General Medicine ,medicine.disease ,Computer algorithm ,Stenosis ,Tilt (optics) ,Beam hardening ,medicine ,Radiology, Nuclear Medicine and imaging ,Radiology ,Vascular Stenosis ,Mathematics ,Biomedical engineering - Abstract
Digitally subtracted images of iodinated rods incorporating accurately measured areas of eccentric stenosis were obtained. The average subjective estimates of fractional area reduction (211 readings each) by three experienced angiographers were compared with values obtained from an interactive computer algorithm applied to the densitometric data. Results were analyzed to identify major influences on accuracy, including iodine concentration, vessel width, absolute severity of the stenosis, and vessel orientation. While no single factor appeared to seriously affect human accuracy, computer readings were significantly influenced by the tilt of the vessel in relation to the x-ray beam. Various potential sources of error, including beam hardening, stenosis geometry, and scattering, are discussed and appropriate algorithm corrections are suggested. The importance of the availability of a reliable and accurate method to quantify vascular stenosis and the volume of stenotic material is stressed.
- Published
- 1985
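The densitometric idea behind measurements like those above is that the background-subtracted iodine signal summed across a transverse vessel profile is roughly proportional to the vessel's cross-sectional area at that location. The sketch below illustrates that relationship; it is a generic example under that assumption, not the paper's interactive algorithm, and the profiles are synthetic.

```python
# A hedged sketch of densitometric estimation of fractional area reduction
# from two transverse vessel profiles (stenotic vs. normal reference segment).
import numpy as np

def fractional_area_reduction(profile_stenotic: np.ndarray,
                              profile_reference: np.ndarray) -> float:
    """Percent area reduction from two transverse densitometric profiles."""
    area_sten = np.sum(profile_stenotic)     # integrated iodine signal at the stenosis
    area_ref = np.sum(profile_reference)     # integrated signal at a normal segment
    return 100.0 * (1.0 - area_sten / area_ref)

# Example: a profile carrying 60% of the reference iodine signal
# corresponds to a 40% area reduction.
x = np.linspace(-1.0, 1.0, 101)
reference = np.clip(1.0 - x**2, 0.0, None)
stenotic = 0.6 * reference
print(f"{fractional_area_reduction(stenotic, reference):.1f}% area reduction")
```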
49. An evaluation of the effectiveness of adaptive histogram equalization for contrast enhancement
- Author
- Stephen M. Pizer, Edward V. Staab, B. C. Brenton, J. R. Perry, John B. Zimmerman, and W. H. McCartney
- Subjects
Contrast enhancement ,Radiological and Ultrasound Technology ,Observer (quantum physics) ,business.industry ,media_common.quotation_subject ,Image processing ,Luminance ,Computer Science Applications ,Intensity (physics) ,Psychophysics ,Medicine ,Contrast (vision) ,Adaptive histogram equalization ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Software ,media_common - Abstract
Adaptive histogram equalization (AHE) and intensity windowing have been compared using psychophysical observer studies. Experienced radiologists were shown clinical CT (computerized tomographic) images of the chest. Into some of the images, appropriate artificial lesions were introduced; the physicians were then shown the images processed with both AHE and intensity windowing. They were asked to assess the probability that a given image contained the artificial lesion, and their accuracy was measured. The results of these experiments show that for this particular diagnostic task, there was no significant difference in the ability of the two methods to depict luminance contrast; thus, further evaluation of AHE using controlled clinical trials is indicated.
- Published
- 1988
50. Automatic digital contrast enhancement of radiotherapy films
- Author
- Stephen M. Pizer, Edward L. Chaney, George W. Sherouse, Julian G. Rosenman, and Harris L. McMurry
- Subjects
Cancer Research ,medicine.medical_specialty ,Contrast enhancement ,media_common.quotation_subject ,medicine.medical_treatment ,Radiography ,Brachytherapy ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,High resolution ,Low contrast ,medicine ,Humans ,Contrast (vision) ,Radiology, Nuclear Medicine and imaging ,Projection (set theory) ,Head and neck ,Pelvic Neoplasms ,media_common ,Radiation ,Radiotherapy ,business.industry ,X-Ray Film ,Radiographic Image Enhancement ,Radiation therapy ,Oncology ,Head and Neck Neoplasms ,Data Display ,Radiology ,business ,Biomedical engineering - Abstract
The practice of radiotherapy involves the precise geometric localization of both anatomic and non-anatomic structures using radiographs which are typically of very low contrast. Portal and verification films suffer from poor contrast as a result of the dominance of Compton interactions at therapeutic energies, and implant localization films often are degraded by extreme patient thickness (lateral pelvis) or projection of bony structures (head and neck). Automatic contrast enhancement techniques developed and proven for optimization of the display of digitally produced images such as CT have been applied to radiotherapy films to improve contrast and augment readability. This approach has become viable only recently with the advent of high speed, high resolution film digitizers and laser cameras and the evolution of sufficiently powerful computer hardware.
- Published
- 1987