1. Towards Efficient 3D Calibration for Different Types of Multi-view Autostereoscopic 3D Displays
- Author
- Y. Q. Guan, Andrei State, Xinxing Xia, Tat-Jen Cham, Henry Fuchs (School of Computer Science and Engineering; Institute for Media Innovation (IMI))
- Subjects
3D Calibration, Pixel, Computer science, Image quality, Image Processing and Computer Vision, Stereo display, Ray, Gray code, Crosstalk, Autostereoscopy, Calibration, Computer vision, Artificial intelligence, Autostereoscopic Displays
- Abstract
A novel and efficient 3D calibration method for different types of multi-view autostereoscopic 3D displays is presented in this paper. In our method, a camera is placed at different locations within the viewing volume of a 3D display to capture a series of images, each corresponding to the subset of light rays emitted by the display that arrives at that camera position. Gray code patterns modulate the images shown on the 3D display, which significantly reduces the number of images the camera must capture and thereby accelerates the computation of the correspondence between pixels on the 3D display and the locations of the capturing camera. The proposed calibration method has been successfully tested on two different types of multi-view 3D displays and can be easily generalized to calibrate other types of such displays. The experimental results show that this calibration method can also improve image quality by reducing the crosstalk frequently observed when multiple users simultaneously view multi-view 3D displays from a range of viewing positions. This research was supported by the National Research Foundation (NRF), Singapore.
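The capture-count reduction described above comes from encoding each display pixel's index as a sequence of Gray-code bit planes rather than lighting pixels or views one at a time. The sketch below, in Python with NumPy, illustrates that encode/decode step only; it is a simplified illustration of the general structured-light technique, not the authors' implementation, and the function names, the single-axis restriction, and the assumption of already-thresholded captures are our own.

```python
import numpy as np

def gray_code_patterns(width, n_bits=None):
    """Generate horizontal Gray-code bit-plane patterns for a display that
    is `width` pixels wide. Returns an array of shape (n_bits, width) with
    values 0/1; each bit plane is shown full-screen as one calibration image."""
    if n_bits is None:
        n_bits = int(np.ceil(np.log2(width)))
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)  # binary column index -> Gray code
    shifts = np.arange(n_bits - 1, -1, -1)[:, None]  # MSB first
    return ((gray[None, :] >> shifts) & 1).astype(np.uint8)

def decode_gray(captured_bits):
    """Decode thresholded per-camera-pixel bits (shape: n_bits x H x W,
    values 0/1) back into the display-column index each camera pixel sees.
    Gray -> binary is a running XOR over the bit planes, MSB first."""
    binary = np.zeros_like(captured_bits)
    acc = np.zeros(captured_bits.shape[1:], dtype=captured_bits.dtype)
    for b in range(captured_bits.shape[0]):
        acc = acc ^ captured_bits[b]
        binary[b] = acc
    weights = 1 << np.arange(captured_bits.shape[0] - 1, -1, -1)
    return np.tensordot(weights, binary, axes=1)  # (H, W) column indices
```

Under this scheme a display that is W pixels wide needs only ceil(log2 W) pattern images per camera position (11 patterns for a 1920-column panel, versus 1920 images if each column were lit individually), which is the source of the capture-count reduction described in the abstract. A practical pipeline would additionally show each pattern together with its inverse for robust thresholding and repeat the procedure along the vertical axis.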
- Published
- 2018, Computer Graphics International 2018 (CGI 2018)