1. Depth Recovery for Kinect Sensor Using Contour-Guided Adaptive Morphology Filter
- Authors
- Yudong Guan, Chunli Ti, Yidan Teng, and Guodong Xu
- Subjects
Pixel, Computer science, Structuring element, Image processing and computer vision, Ranging, Filter (signal processing), Set (abstract data type), Feature (computer vision), Human visual system model, RGB color model, Computer vision, Artificial intelligence, Electrical and Electronic Engineering, Instrumentation
- Abstract
Consumer-grade RGB-D cameras, such as Kinect sensors, can support far more real-time 3-D vision tasks than game control alone. However, the inherent depth degradations caused by their infrared ranging constrain their application potential and can hardly be avoided by improving the sensor design. Therefore, in this paper, we propose a contour-guided shape-adaptive morphology filter to efficiently recover the depth data of Kinect sensors. First, we put forward a statistical measure to quantitatively evaluate the texture richness of imaging sensors' data and verify the applicability of morphology filtering on both Kinect 1 and Kinect 2 depth data. Then, considering the significance of semantic contours, a multiresolution RGB-D contour extraction method is introduced to suppress the texture inside objects. A shape-adaptive structuring element (SASE) is then created for each missing or untrusted depth pixel, based on the contour guidance and characteristics of the human visual system. Efficient and accurate depth recovery is finally achieved by combining morphology filtering with the obtained SASEs. Experiments on a simulated data set and on real Kinect 1 and Kinect 2 data show that our method outperforms many competing state-of-the-art approaches and avoids blurring around depth discontinuities.
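The core idea described in the abstract (filling missing or untrusted depth pixels by morphology while respecting object contours) can be illustrated with a minimal sketch. The function below is a hypothetical simplification, not the paper's actual SASE construction: for each missing (zero) depth pixel it aggregates valid neighbours in a square window, excluding pixels flagged as contours so that values do not bleed across depth discontinuities; the paper instead builds a per-pixel shape-adaptive structuring element from RGB-D contour guidance. The function name and parameters are illustrative assumptions.

```python
import numpy as np

def fill_missing_depth(depth, contour_mask, radius=2, iterations=3):
    """Fill missing (zero) depth pixels with a contour-aware morphology pass.

    Hypothetical simplification of contour-guided adaptive morphology:
    each missing pixel is replaced by the median of the valid, non-contour
    neighbours inside a (2*radius+1)^2 window, so depth values are not
    propagated across object boundaries. Repeating the pass lets fills
    grow into larger holes.
    """
    depth = depth.astype(float).copy()
    h, w = depth.shape
    for _ in range(iterations):
        missing = np.argwhere(depth == 0)
        if missing.size == 0:
            break  # every pixel recovered
        updates = {}
        for y, x in missing:
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            window = depth[y0:y1, x0:x1]
            blocked = contour_mask[y0:y1, x0:x1]
            # keep only neighbours that carry depth and are not contours
            valid = window[(window > 0) & (~blocked)]
            if valid.size:
                updates[(y, x)] = np.median(valid)
        for (y, x), v in updates.items():
            depth[y, x] = v
    return depth
```

A usage example: for a 5x5 depth map of constant value 10 with a single dropout at the centre and no contours nearby, one pass restores the dropout to 10, since every window neighbour agrees.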
- Published
- 2017