13 results for '"Yi-ping Hung"'
Search Results
2. Quantitative spectroscopic comparison of the optical properties of the mouse cochlear microstructures using optical coherence tomography at 1.3 µm and 1 µm wavelength regimes (Conference Presentation)
- Author
-
Meng-Tsan Tsai, Yin-Peng Huang, Hao Wang, Chih-Hung Wang, Hsiang-Chieh Lee, Ting Hao Chen, Ting-Yen Tsai, Yi-Ping Hung, Hsin-Chien Chen, Teng-Chieh Chang, and Yu-Wei Chang
- Subjects
Wavelength, Materials science, Optics, Optical coherence tomography, Medicine, Microstructure - Published
- 2020
- Full Text
- View/download PDF
3. Development of a puzzle-box game with haptic feedback
- Author
-
Yoshimasa Tokuyama, Tsubasa Miyazato, Kouichi Konno, R.P.C. Janaka Rajapakse, and Yi-Ping Hung
- Subjects
Computer science, Human–computer interaction, Haptic technology - Published
- 2019
- Full Text
- View/download PDF
4. Study of a viewer tracking system with multiview 3D display
- Author
-
Chuan-Heng Hsiao, Yi-Ping Hung, Jinn-Cherng Yang, Wen-Chieh Liu, Chang-Shuo Wu, and Ming-Chieh Yang
- Subjects
Computer science, Stereoscopy, Tracking system, Frame rate, Stereo display, Rendering (computer graphics), Visualization, Computer graphics (images), Autostereoscopy, Computer vision, Artificial intelligence, Face detection - Abstract
An autostereoscopic display lets users enjoy stereo visualization without the discomfort and inconvenience of wearing stereo glasses. However, bandwidth constraints of current multi-view 3D displays severely restrict the number of views that can be displayed simultaneously without degrading resolution or increasing display cost unacceptably. An alternative to presenting multiple views is to measure the observer's position with a viewer-tracking sensor; the viewer-tracking component is essential for rendering fluently and projecting the stereo video accurately. In order to render stereo content with respect to the user's viewpoint and to project it optically onto the user's left and right eyes, this study develops a real-time viewer-tracking technique that allows the user to move around freely while watching the autostereoscopic display. It comprises face detection using multiple eigenspaces built for various lighting conditions, and fast block matching for tracking four motion parameters of the user's face region. Edge Orientation Histogram (EOH) features on Real AdaBoost are also applied to improve the performance of the original AdaBoost algorithm, which uses Haar features from Intel's OpenCV library to detect human faces; accuracy is further enhanced by detecting on rotated images. The frame rate of the viewer-tracking process reaches 15 Hz. Since the performance of the viewer-tracking autostereoscopic display is still affected by varying environmental conditions, the accuracy, robustness, and efficiency of the viewer-tracking system are evaluated in this study.
- Published
- 2008
- Full Text
- View/download PDF
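A minimal sketch of the Haar-cascade face-detection front end named in the abstract above, assuming OpenCV's stock frontal-face cascade; the paper's eigenspace matching, block matching, and EOH/Real-AdaBoost refinements are not reproduced here.

```python
# Minimal viewer-tracking loop: detect the face per frame with OpenCV's
# stock Haar cascade. Only the detection stage of the system described in
# the abstract is sketched; all refinements from the paper are omitted.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                    # viewer-facing camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # The face center is what would drive view selection on the display.
        cv2.circle(frame, (x + w // 2, y + h // 2), 3, (0, 0, 255), -1)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("viewer tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:          # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```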
5. Generation of binocular object movies from monocular object movies
- Author
-
Yi-Ping Hung, Wan-Yen Lo, Ying-Ruei Chen, and Yu-Pao Tsai
- Subjects
Monocular, Computer science, Stereoscopy, Image processing, Virtual reality, Object (computer science), Stereopsis, Computer graphics (images), Computer vision, Augmented reality, Artificial intelligence - Abstract
The object movie (OM) is a popular technique for producing interactive 3D artifacts because of its simplicity of production and its photo-realistic presentation of the artifacts. At the same time, many stereoscopic vision techniques have been developed for a variety of applications. However, the traditional approach to generating binocular object movies requires duplicate effort compared with monocular ones, both in acquisition and in image processing. We therefore propose a framework to generate stereo OMs from monocular ones with the help of a 3D model constructed automatically from the monocular OM. A new representation of the 3D model, named billboard clusters, is proposed for efficiently generating binocular views. To obtain better results, a novel approach to extracting view-independent texture is developed in this work. In addition, billboard clusters can be used to compress the storage of OMs and to perform relighting, so that binocular OMs can be convincingly augmented into virtual environments with different lighting conditions. This paper describes the methods in detail and reports on their wide applications.
- Published
- 2007
- Full Text
- View/download PDF
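An illustrative sketch of only the final binocular-rendering step implied by the abstract above: projecting a recovered 3D model through two horizontally offset pinhole cameras. The paper's billboard-cluster representation is not reproduced; the names and parameters here (`eye_sep`, `f`) are assumptions for illustration.

```python
# Toy stereo-pair generation: project 3D points recovered from a monocular
# object movie through two pinhole cameras separated horizontally.
import numpy as np

def project(points, f=800.0, eye_offset=0.0):
    """Pinhole projection of Nx3 points; camera shifted by eye_offset in x."""
    shifted = points - np.array([eye_offset, 0.0, 0.0])
    return f * shifted[:, :2] / shifted[:, 2:3]   # perspective divide by z

model_points = np.array([[0.1, 0.2, 2.0], [-0.3, 0.1, 2.5]])  # toy 3D model
eye_sep = 0.065                     # roughly human interocular, in meters
left = project(model_points, eye_offset=-eye_sep / 2)
right = project(model_points, eye_offset=+eye_sep / 2)
print("horizontal disparities:", left[:, 0] - right[:, 0])
```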
6. Disparity-based view interpolation for multiple-perspective stereoscopic displays
- Author
-
Yung-Chieh Lin, Yi-Ping Hung, Ho-Chao Huang, and Ching-Che Kao
- Subjects
Video post-processing, Pixel, Computer science, Template matching, Image processing, Stereoscopy, Video tracking, Segmentation, Computer vision, Artificial intelligence, Multiview Video Coding, Interpolation, Data compression - Abstract
This paper presents a new technique for generating multiview video from a two-view video sequence. For each stereo frame in the two-view sequence, our system first estimates the corresponding point of each pixel by template matching, and then constructs the disparity maps required for view interpolation. To generate accurate disparity maps, we use adaptive-template matching, where the template size depends on the local variation of image intensity and on knowledge of object boundaries. Both the disparity maps and the original stereo video are then compressed to reduce storage size and transfer time. Based on the disparity, our system can generate, in real time, a stereo video of the desired perspective view by interpolation or extrapolation from the original views, in response to the head movement of the user. Compared with the traditional method of capturing multiple perspective videos directly, the view-interpolation approach eliminates the problems caused by synchronizing multiple video inputs and by the large amount of video data that must be stored and transferred.
- Published
- 2000
- Full Text
- View/download PDF
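A minimal sketch of disparity-scaled view interpolation in the spirit of the abstract above: pixels of the left image are forward-warped by a fraction of their disparity to synthesize a view between the two cameras. Hole filling and the paper's adaptive-template matching are omitted, and the function name is illustrative.

```python
# Forward-warp the left image by alpha * disparity to synthesize an
# intermediate view (alpha=0 -> left view, alpha=1 -> right view).
import numpy as np

def interpolate_view(left_img, disparity, alpha):
    h, w = left_img.shape[:2]
    out = np.zeros_like(left_img)
    xs = np.arange(w)
    for y in range(h):
        # Shift each pixel horizontally by a fraction of its disparity.
        x_new = np.clip((xs - alpha * disparity[y]).astype(int), 0, w - 1)
        out[y, x_new] = left_img[y, xs]
    return out
```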
7. New video object segmentation technique based on flow-thread features for MPEG-4 and multimedia systems
- Author
-
Yung-Chieh Lin, Ho-Chao Huang, Chiou-Shann Fuh, and Yi-Ping Hung
- Subjects
Contextual image classification, Pixel, Multimedia, Segmentation-based object categorization, Computer science, Scale-space segmentation, Image processing, Image segmentation, Motion estimation, MPEG-4, Segmentation, Computer vision, Artificial intelligence, Range segmentation - Abstract
In this paper, we present a novel technique for video object (VO) segmentation. Compared with existing VO segmentation methods, our method has the advantage that it does not decompose the VO segmentation problem into an initial image segmentation problem (segmenting a single image frame) followed by a temporal tracking problem. Instead, motion information contained in a finite duration of the image sequence is considered simultaneously. Given a video sequence, our method first estimates motion vectors between consecutive images, and then constructs the flow-thread for each pixel based on the estimated motion vectors. Here, a flow-thread is a series of pixels obtained by tracing the motion vectors along the image sequence. Next, we extract a set of flow-thread features (ft-features) from each flow-thread, which is then used to classify the associated pixel into the VO it belongs to. The segmentation results obtained by our unsupervised method look promising, and the processing speed is fast enough for practical applications.
- Published
- 2000
- Full Text
- View/download PDF
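A minimal sketch of flow-thread construction as defined in the abstract above: starting from a pixel in the first frame, follow the estimated motion vectors frame by frame and collect the intensities along the way. The ft-feature extraction and pixel classification stages, and the motion estimator itself, are not reproduced.

```python
# Trace one flow-thread through a sequence of frames given per-frame
# motion fields; the returned intensity series is what ft-features
# would be computed from (e.g., its variance).
import numpy as np

def flow_thread(frames, flows, y0, x0):
    """frames: list of HxW arrays; flows[t]: HxWx2 motion (dy, dx), t -> t+1."""
    h, w = frames[0].shape
    y, x = float(y0), float(x0)
    thread = [frames[0][y0, x0]]
    for t in range(len(flows)):
        dy, dx = flows[t][int(round(y)), int(round(x))]
        y = min(max(y + dy, 0), h - 1)
        x = min(max(x + dx, 0), w - 1)
        thread.append(frames[t + 1][int(round(y)), int(round(x))])
    return np.array(thread)
```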
8. Reconstruction of complete 3D object model from multiview range images
- Author
-
Chu-Song Chen, Yi-Ping Hung, Chiou-Shann Fuh, and Ing-Bor Hsieh
- Subjects
Marching cubes, Computer science, Image registration, Image processing, Volume rendering, Iterative reconstruction, RANSAC, 3D modeling, Triangle mesh, Object model, Computer vision, Polygon mesh, Triangulation, Artificial intelligence, Texture mapping - Abstract
In this paper, we designed and implemented a method that registers and integrates range images obtained from different viewpoints to build complete 3D object models. The method contains three major parts: (1) registration of range images and estimation of the parameters of rigid-body transformations, (2) integration of redundant surface patches and generation of triangulated mesh surface models, and (3) reduction of the triangular mesh and texture mapping. We developed the RANSAC-based DARCES technique to estimate the parameters of the rigid-body transformation between two partially overlapping range images without requiring initial estimates. We then used a circular-ICP procedure to reduce the global registration error, and the consensus surface algorithm combined with the marching cubes method to generate triangular meshes. Finally, by texture blending and mapping, we can reconstruct a virtual 3D model containing both geometric and texture information.
- Published
- 1999
- Full Text
- View/download PDF
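A sketch of the rigid-body estimate at the core of range-image registration: the least-squares rotation and translation from point correspondences (the standard Kabsch/SVD solution, named here explicitly). The paper's RANSAC-based DARCES search and circular-ICP refinement, which supply and refine the correspondences, are not reproduced.

```python
# Closed-form rigid alignment: find R, t with dst ~= R @ src + t.
import numpy as np

def rigid_transform(src, dst):
    """src, dst: corresponding points, each of shape Nx3."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```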
9. Kinematic calibration of a binocular head using stereo vision with the complete and parametrically continuous model
- Author
-
Yi-Ping Hung, Sheng-Wen Shih, Jia-Sheng Jin, and Kuo-Hua Wei
- Subjects
Stereopsis, Positioning system, Computer science, Calibration, Computer vision, Kinematics, Artificial intelligence, Revolute joint, 3D modeling, Computer stereo vision, Stereo camera - Abstract
This paper describes the process of calibrating the kinematic model of an active binocular head having four revolute joints and two prismatic joints. We use the complete and parametrically continuous (CPC) model proposed by Zhuang and Roth in 1990 to model the motorized head (or camera positioning system), and use a closed-form solution to identify its CPC kinematic parameters. The calibration procedure is divided into two stages. In the first stage, the two cameras are replaced by two end-effector calibration plates, each bearing nine circles. The two removed cameras are used to build a stereo vision system for observing the varying positions and orientations of the end-effector calibration plates as the joints of the head move. The positions and orientations of the calibration plates, or equivalently of the end-effectors, can be determined from the stereo measurements, and the acquired data are then used to calibrate the kinematic parameters. In the second stage, the cameras are remounted on the IIS-head, and a method proposed by Tsai is used to calibrate the hand-eye relation. Once the kinematic calibration is done, the binocular head can be controlled to gaze at or track 3-D targets.
- Published
- 1992
- Full Text
- View/download PDF
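A minimal sketch of the forward-kinematics bookkeeping such a calibration rests on: chaining 4×4 homogeneous transforms for revolute and prismatic joints. This is a generic joint chain, not the CPC parameterization of Zhuang and Roth used in the paper; the joint axes chosen here are illustrative assumptions.

```python
# Compose joint motions with fixed link transforms to get the end-effector
# (camera) pose; each factor is a 4x4 homogeneous transform.
import numpy as np

def rot_z(theta):
    """Revolute joint: rotation about the local z axis."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def trans_z(d):
    """Prismatic joint: translation along the local z axis."""
    T = np.eye(4)
    T[2, 3] = d
    return T

def forward(joints, links):
    """joints and links: matching lists of 4x4 transforms."""
    T = np.eye(4)
    for J, L in zip(joints, links):
        T = T @ J @ L
    return T

# Example: two revolute joints and one prismatic joint, identity links.
pose = forward([rot_z(0.3), rot_z(-0.1), trans_z(0.05)], [np.eye(4)] * 3)
```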
10. Efficient and accurate camera calibration technique for 3-D computer vision
- Author
-
Sheng-Wen Shih, Yi-Ping Hung, and We-Song Lin
- Subjects
Computer science, Camera matrix, Machine vision, Distortion (optics), Optical axis, Camera auto-calibration, Focal length, Pinhole camera model, Computer vision, Artificial intelligence, Camera resectioning - Abstract
In this paper, we propose a new technique for calibrating a camera with very high accuracy and low computational cost. The geometric camera parameters considered include camera position, orientation, focal length, radial lens distortion, pixel size, and the optical-axis piercing point. With our method, the camera parameters to be estimated are divided into two parts: the radial lens distortion coefficient κ and a composite parameter vector c composed of all the above geometric camera parameters other than κ. Instead of using nonlinear optimization techniques, the estimation of κ is transformed into an eigenvalue problem of an 8 × 8 matrix. Our method is fast, since it requires only linear computation, and accurate, since the effect of lens distortion is considered and all the information contained in the calibration points is used. Computer simulations and real experiments have shown that the performance of our calibration method is better than that of the well-known method proposed by Tsai.
- Published
- 1992
- Full Text
- View/download PDF
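A minimal sketch of the single-coefficient radial distortion model that κ parameterizes, assuming the common convention x_d = x_u(1 + κr²); the eigenvalue-based estimation of κ and the recovery of the composite vector c are not reproduced, and the paper's exact distortion convention may differ.

```python
# Apply single-coefficient radial lens distortion about a given center.
import numpy as np

def distort(points, kappa, center=(0.0, 0.0)):
    """points: Nx2 ideal image coordinates; returns distorted coordinates."""
    p = np.asarray(points, dtype=float) - center
    r2 = (p ** 2).sum(axis=1, keepdims=True)   # squared radial distance
    return p * (1.0 + kappa * r2) + center
```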
11. Fast image segmentation by sliding in the derivative terrain
- Author
-
Yi-Ping Hung and Xu Yao
- Subjects
Machine vision, Segmentation-based object categorization, Scale-space segmentation, Image processing, Image segmentation, Region growing, Computer vision, Segmentation, Artificial intelligence, Tobogganing - Abstract
Image segmentation is one of the most important problems in computer vision. Recently, Fairfield proposed an interesting approach to image segmentation using toboggan enhancement followed by naive contrast segmentation, a noniterative method with linear execution time. The way it operates can be thought of as a person tobogganing in the first-derivative terrain, i.e., the graph surface of a discontinuity measure computed from the first derivative of the image intensity. The segmentation results it produces appear equal in quality to those of other, more complex optimal region-growing methods. In this paper, an improved version of Fairfield's method, called keep-sliding toboggan segmentation, is presented. With our method, the toboggan keeps sliding on a plane in the derivative terrain where the original toboggan method would stop sliding; our method therefore produces far fewer regions than the original. Other improvements are as follows. Instead of being followed by a contrast-segmentation post-process, our keep-sliding tobogganing process is preceded by a prefiltering process that suppresses small fluctuations in the first-derivative terrain; because of this prefiltering, the tobogganing process can automatically merge regions having small inter-contrast. A new discontinuity measure is also proposed to allow the detection of small target regions without over-segmenting the images. Experimental results indicate that the segmentations produced by the keep-sliding toboggan method are less noisy, and they are therefore more appropriate as initial segmentations for higher-level image segmentation techniques.
- Published
- 1992
- Full Text
- View/download PDF
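A minimal sketch of basic tobogganing on the first-derivative terrain, the baseline the abstract above improves on: each pixel slides downhill on the gradient magnitude to a local minimum, and pixels sharing a minimum form one region. The paper's keep-sliding rule, prefiltering, and new discontinuity measure are not reproduced.

```python
# Label pixels by sliding each one downhill on the gradient-magnitude
# surface until a local minimum (or an already-labeled pixel) is reached.
import numpy as np

def toboggan(grad_mag):
    h, w = grad_mag.shape
    labels = -np.ones((h, w), dtype=int)

    def slide(y, x):
        path = []
        while labels[y, x] < 0:
            path.append((y, x))
            labels[y, x] = -2                       # mark as on current path
            nbrs = [(yy, xx) for yy in (y - 1, y, y + 1)
                    for xx in (x - 1, x, x + 1)
                    if 0 <= yy < h and 0 <= xx < w]
            yy, xx = min(nbrs, key=lambda q: grad_mag[q])
            if grad_mag[yy, xx] >= grad_mag[y, x]:  # local minimum reached
                break
            y, x = yy, xx
        # Reuse an existing label, or mint one from the minimum's position.
        label = labels[y, x] if labels[y, x] >= 0 else y * w + x
        for q in path:
            labels[q] = label

    for y in range(h):
        for x in range(w):
            if labels[y, x] < 0:
                slide(y, x)
    return labels
```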
12. Maximum a-posteriori probability 3-D surface reconstruction using multiple intensity images directly
- Author
-
David B. Cooper and Yi-Ping Hung
- Subjects
Markov random field, Computer science, Estimation theory, Posterior probability, Optical flow, Pattern recognition, Maximum a posteriori estimation, Artificial intelligence, Likelihood function, Correspondence problem, Surface reconstruction - Abstract
Reconstructing 3D surfaces from multiple intensity images is an important problem in computer vision. Most approaches to this problem require either finding 2D feature correspondences or estimating the optical flow first. The probabilistic model-based approach presented in this paper uses the intensity data directly (i.e., no feature extraction) to reconstruct 3D surfaces without first solving the correspondence problem or explicitly estimating optical flow. We model 3D objects as surface patches, where each patch is described by a function known up to the values of a few parameters. Surface reconstruction is then treated as a parameter-estimation problem based on two or more images taken by a moving camera. By constructing the likelihood function for the surface parameters and modeling prior knowledge about 3D surfaces with a Markov random field, we are able to compute the maximum a posteriori 3D surface reconstruction based on the observed images. This paper presents experimental results based on a sequence of intensity images taken by a moving camera. Our approach has the advantages of: (i) directly estimating shape for surface patches, thus making object recognition simpler; (ii) formally incorporating prior knowledge about 3D surfaces; (iii) being highly parallel in the required computation, hence promising for real-time operation; (iv) producing optimal accuracy in a probabilistic sense; (v) being algorithmically simple; and (vi) being robust with real data.
- Published
- 1990
- Full Text
- View/download PDF
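A toy sketch of the MAP objective described in the abstract above: a negative log-posterior combining an intensity-matching likelihood across images with an MRF-style smoothness prior. The projection model, noise model, and parameterization here are illustrative stand-ins, not the paper's.

```python
# Negative log-posterior for surface parameters: Gaussian intensity
# likelihood across images plus a quadratic smoothness prior.
import numpy as np

def neg_log_posterior(params, images, project, sigma=1.0, lam=0.1):
    """params: per-patch surface parameters (1D array here, for simplicity);
    project(params, i) -> predicted intensities aligned with images[i]."""
    data = sum(((project(params, i) - img) ** 2).sum() / (2 * sigma ** 2)
               for i, img in enumerate(images))
    prior = lam * (np.diff(params) ** 2).sum()   # penalize rough surfaces
    return data + prior
```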
13. A Simple Real-Time Method For Calibrating A Camera Mounted On A Robot For Three Dimensional Machine Vision
- Author
-
Yi-Ping Hung
- Subjects
Robot calibration, Machine vision, Computer science, Distortion (optics), Coordinate system, Camera auto-calibration, Calibration, Robot, Computer vision, Artificial intelligence, Camera resectioning - Abstract
For a vision system to infer 3D object features (e.g., point features or line features) from 2D image features, and to predict 2D image features from 3D object features, the transformation between the 2D image coordinate system and the 3D object coordinate system must be known. Determining this transformation is called (geometric) camera calibration. This 2D-3D transformation varies as the camera is moved by a robot. This paper presents a simple two-stage (off-line and on-line) method for calibrating a camera mounted on a robot. The off-line stage includes off-line calibration of the camera, the robot, and the robot-camera (hand-eye) relation. The on-line stage can be divided into an initial calculation step and a re-calibration step. The initial step is computationally simple, since it involves only one or two matrix multiplications of dimension less than four. The calibration accuracy derived from this initial step depends on the accuracy of the off-line camera and robot calibration. Experimental results show that the calibration accuracy is better than "1 part in 1000" without special robot calibration. Higher accuracy can be obtained with more sophisticated robot calibration, or with the re-calibration step using extended Kalman filtering techniques. Some new insights into conventional camera calibration methods are also given.
- Published
- 1989
- Full Text
- View/download PDF
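A minimal sketch of the on-line initial step described in the abstract above: once the off-line camera, robot, and hand-eye calibrations are known, the current 2D-3D transform follows from composing a couple of homogeneous matrices. The matrix names and conventions here are illustrative assumptions, not the paper's notation.

```python
# Compose the robot pose with the hand-eye calibration to get the current
# world-to-camera transform, then project a 3D world point into the image.
import numpy as np

def camera_from_world(T_base_world, T_ee_base, T_cam_ee):
    """All arguments are 4x4 homogeneous transforms."""
    return T_cam_ee @ T_ee_base @ T_base_world

def project(K, T_cam_world, X_world):
    """K: 3x3 intrinsics; X_world: homogeneous 4-vector; returns pixel (u, v)."""
    Xc = (T_cam_world @ X_world)[:3]
    uvw = K @ Xc
    return uvw[:2] / uvw[2]
```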