
On the possibility to achieve 6-DoF for 360 video using divergent multi-view content

Authors :
Mohamed-Chaker Larabi
Joel Jung
Bappaditya Ray
Université de Poitiers
Synthèse et analyse d'images (XLIM-ASALI)
XLIM (XLIM)
Université de Limoges (UNILIM) - Centre National de la Recherche Scientifique (CNRS)
Orange Labs [Issy les Moulineaux]
France Télécom
Source :
EUSIPCO 2018, 26th European Signal Processing Conference, Sep 2018, Rome, Italy. pp. 211-215, ⟨10.23919/EUSIPCO.2018.8553397⟩
Publication Year :
2018
Publisher :
IEEE, 2018.

Abstract

With the rapid emergence of various 360 video capturing devices and head-mounted displays, providing an immersive experience using 360 videos is becoming a topic of paramount interest. Current techniques face motion sickness issues, resulting in a low quality of experience. One of the reasons is that they do not take advantage of the parallax between the divergent views, which could help provide the correct view according to the user's head motion. In this paper, we propose to move away from the classical equirectangular (ERP) representation and to synthesize arbitrary views from different divergent views together with their corresponding depth maps, thereby exploiting the parallax between the divergent views. In this context, we assess the feasibility of depth estimation and view synthesis using state-of-the-art techniques. Simulation results confirm the feasibility of this proposal and show that sufficient visual quality can be achieved for head motions of up to 0.1 m from the rig when the generated depth maps are used for view synthesis.
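The view synthesis the abstract relies on is typically built on depth-image-based rendering (DIBR): each source pixel is unprojected with its depth, displaced by the virtual camera's pose (here a small translation modeling head motion), and reprojected. The sketch below illustrates that warp under simplifying assumptions not taken from the paper — a pinhole camera instead of an omnidirectional projection, identity rotation, and made-up intrinsics and image size.

```python
import numpy as np

def warp_to_virtual_view(depth, K, t):
    """Forward-warp source pixel coordinates into a virtual view (DIBR core).

    depth : (H, W) depth map of the source view, in meters
    K     : (3, 3) pinhole intrinsics, assumed shared by both views
    t     : (3,)   translation of the virtual camera (e.g. head motion)
    Returns an (H, W, 2) array of warped (u, v) coordinates.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)  # (H, W, 3)
    # Unproject: 3-D point = depth * K^{-1} [u, v, 1]^T
    pts = depth[..., None] * (pix @ np.linalg.inv(K).T)
    # Express the point in the virtual camera frame (rotation assumed identity)
    pts = pts - t
    # Reproject with the intrinsics and dehomogenize
    proj = pts @ K.T
    return proj[..., :2] / proj[..., 2:3]

# Toy example: a fronto-parallel scene 2 m away, head moved 0.1 m to the right.
# Every pixel then shifts horizontally by f * tx / z = 500 * 0.1 / 2 = 25 px.
K = np.array([[500.0, 0.0, 64.0],
              [0.0, 500.0, 64.0],
              [0.0, 0.0, 1.0]])
depth = np.full((128, 128), 2.0)
coords = warp_to_virtual_view(depth, K, np.array([0.1, 0.0, 0.0]))
```

In a real pipeline the warped coordinates would drive resampling with z-buffering and hole filling, since disoccluded regions have no source pixels — which is exactly why the parallax between several divergent views is valuable.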

Details

Database :
OpenAIRE
Journal :
2018 26th European Signal Processing Conference (EUSIPCO)
Accession number :
edsair.doi.dedup.....98830ad9f2a6f07ecee9f808b991708e
Full Text :
https://doi.org/10.23919/eusipco.2018.8553397