29 results on "Caroline Baillard"
Search Results
2. AR-Bot, a Centralized AR-based System for Relocalization and Home Robot Navigation.
- Author
Matthieu Fradet, Caroline Baillard, Vincent Alleaume, Pierrick Jouet, Anthony Laurent, and Tao Luo
- Published
- 2021
- Full Text
- View/download PDF
3. AR-Chat: an AR-based instant messaging system.
- Author
Pierrick Jouet, Vincent Alleaume, Anthony Laurent, Matthieu Fradet, Tao Luo, and Caroline Baillard
- Published
- 2020
- Full Text
- View/download PDF
4. A Multi-resolution Approach for Color Correction of Textured Meshes.
- Author
Mohammad Rouhani, Matthieu Fradet, and Caroline Baillard
- Published
- 2018
- Full Text
- View/download PDF
5. MR TV Mozaik: A New Mixed Reality Interactive TV Experience.
- Author
Matthieu Fradet, Caroline Baillard, Anthony Laurent, Tao Luo, Philippe Robert, Vincent Alleaume, Pierrick Jouet, and Fabien Servant
- Published
- 2017
- Full Text
- View/download PDF
6. Introduction to AR-Bot, an AR system for robot navigation.
- Author
Vincent Alleaume, Caroline Baillard, Matthieu Fradet, Pierrick Jouet, Anthony Laurent, and Tao Luo
- Published
- 2020
- Full Text
- View/download PDF
7. Efficient texture mapping via a non-iterative global texture alignment.
- Author
Mohammad Rouhani, Matthieu Fradet, and Caroline Baillard
- Published
- 2020
8. Probeless and Realistic Mixed Reality Application in Presence of Dynamic Light Sources.
- Author
Salma Jiddi, Philippe Robert, Anthony Laurent, Matthieu Fradet, Pierrick Jouet, Caroline Baillard, and Éric Marchand
- Published
- 2018
- Full Text
- View/download PDF
9. Gable Roof Detection in Terrestrial Images.
- Author
Vincent Brandou and Caroline Baillard
- Published
- 2011
- Full Text
- View/download PDF
10. Multi-device mixed reality TV: a collaborative experience with joint use of a tablet and a headset.
- Author
Caroline Baillard, Matthieu Fradet, Vincent Alleaume, Pierrick Jouet, and Anthony Laurent
- Published
- 2017
- Full Text
- View/download PDF
11. Mixed Reality Extended TV.
- Author
Caroline Baillard, Vincent Alleaume, Matthieu Fradet, Pierrick Jouet, Anthony Laurent, Tao Luo, Philippe Robert, and Fabien Servant
- Published
- 2016
- Full Text
- View/download PDF
12. Automatic Reconstruction of Piecewise Planar Models from Multiple Views.
- Author
Caroline Baillard and Andrew Zisserman
- Published
- 1999
- Full Text
- View/download PDF
13. Light4AR: a Shadow-based Estimator of Multiple Light Sources in Interactive Time for More Photorealistic AR Experiences
- Author
Caroline Baillard, Matthieu Fradet, Anthony Laurent, Pierrick Jouet, and Patrice Hirtzlin
- Subjects
business.industry, Computer science, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Object (computer science), Human-centered computing, User experience design, Position (vector), Shadow, Augmented reality, Point (geometry), Computer vision, Artificial intelligence, business, Mobile device, ComputingMethodologies_COMPUTERGRAPHICS
- Abstract
We present Light4AR, a light source estimation solution based on the detection of real cast shadows in an image captured by a mobile device. Given the camera pose of the image and a local 3D model of the scene, the approach analyzes the shadows cast by a reference object onto its supporting plane to determine the 3D position and intensity of multiple light sources. By creating virtual point lights from the resulting parameters and adding them to the AR scene, all the virtual objects can be illuminated and cast virtual shadows consistent with the real environment lighting, thereby enhancing user experience and object presence. In addition to offering the ability to share results across multiple users while preserving device resources, the server-based GPU implementation provides results in interactive time and makes photorealistic AR experiences accessible to most mobile devices, rather than limiting them to recent models or a specific OS. The proposed approach requires only minimal manual input: placing a reference object of known geometry in the scene, then selecting a region that includes the shadows cast by this object. We show the potential of this approach on several challenging scenes with various lighting configurations and background textures. (A simplified sketch of the light-from-shadows computation follows this entry.)
- Published
- 2021
- Full Text
- View/download PDF
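The abstract above recovers light positions from shadows cast by a reference object. As a rough illustration (hypothetical helper, not the authors' implementation): if each detected shadow point is paired with the object point that casts it, every pair defines a ray passing through the light, and a point-light position can be taken as the least-squares intersection of those rays.

```python
# Hypothetical sketch (not the published code): recover a point-light position from
# shadows cast by a reference object of known geometry onto its supporting plane.
# Each shadow point S_i, paired with the object point O_i casting it, defines a ray
# S_i -> O_i that passes through the light.
import numpy as np

def estimate_point_light(object_pts, shadow_pts):
    """object_pts, shadow_pts: (N, 3) arrays of matching 3D points."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, s in zip(np.asarray(object_pts, float), np.asarray(shadow_pts, float)):
        d = o - s
        d /= np.linalg.norm(d)            # unit ray direction (shadow -> object -> light)
        P = np.eye(3) - np.outer(d, d)    # projector orthogonal to the ray
        A += P
        b += P @ s
    return np.linalg.solve(A, b)          # point minimizing the distance to all rays

# Usage example: two object/shadow pairs generated by a light at (1, 2, 3)
# estimate_point_light([[0.5, 1, 1.5], [0.75, 1, 1.5]], [[0, 0, 0], [0.5, 0, 0]])
# returns approximately [1., 2., 3.].
```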
14. Segmentation of urban scenes from aerial stereo imagery.
- Author
Caroline Baillard, Olivier Dissard, and Henri Maître
- Published
- 1998
- Full Text
- View/download PDF
15. 3-D Reconstruction of Urban Scenes from Aerial Stereo Imagery: A Focusing Strategy.
- Author
Caroline Baillard and Henri Maître
- Published
- 1999
- Full Text
- View/download PDF
16. AR-Bot, a Centralized AR-based System for Relocalization and Home Robot Navigation
- Author
Tao Luo, Pierrick Jouet, Caroline Baillard, Vincent Alleaume, Matthieu Fradet, and Anthony Laurent
- Subjects
Human–computer interaction, Computer science, Home robot
- Published
- 2021
- Full Text
- View/download PDF
17. Detection of Removed Objects in 3D Meshes Using Up-to-Date Images for Mixed-Reality Applications
- Author
Caroline Baillard, Guillaume Moreau, Olivier Roupin, and Matthieu Fradet (InterDigital R&D France; IMT Atlantique; Lab-STICC, CNRS)
- Subjects
3D model, Computer Networks and Communications, Computer science, lcsh:TK7800-8360, projection, 02 engineering and technology, foreground object, 0202 electrical engineering, electronic engineering, information engineering, Polygon mesh, Computer vision, Electrical and Electronic Engineering, Projection (set theory), change detection, mixed reality, business.industry, lcsh:Electronics, Process (computing), 020207 software engineering, Object (computer science), Mixed reality, [INFO.INFO-GR]Computer Science [cs]/Graphics [cs.GR], image sequence, Hardware and Architecture, Control and Systems Engineering, Signal Processing, 020201 artificial intelligence & image processing, Artificial intelligence, occluding object, business, Change detection
- Abstract
Precise knowledge of the real environment is a prerequisite for the integration of the real and virtual worlds in mixed-reality applications. However, updating a real environment model in real time is a costly and difficult process, so hybrid approaches have been developed: an offline acquisition of the 3D world is updated online using live image sequences, provided that fast and robust change detection algorithms are available. Current algorithms are biased toward object insertion and often fail at detecting object removal: when the background is uniform in color and intensity, the disappearance of foreground objects between the 3D scan of a scene and the capture of new pictures of that scene is difficult to detect. The novelty of our approach is that we circumvent this issue by focusing on the areas of least change in the parts of the scene that should be occluded by the foreground. Through experiments on realistic datasets, we show that this approach results in better detection and localization of removed objects. The technique can be paired with an insertion detection algorithm to provide a complete change detection framework. (An illustrative sketch of the occlusion-based test follows this entry.)
- Published
- 2021
- Full Text
- View/download PDF
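The abstract above keys on pixels that the prior 3D model says should be occluded by foreground objects. The sketch below (hypothetical function names and a simplified appearance test, not the published pipeline) illustrates the underlying idea of comparing the new image against renderings of the stored model with and without the object.

```python
# Hypothetical illustration: flag a removal by checking pixels that should be occluded
# by a foreground object according to the old 3D model. If those pixels now look like
# the background rendered without the object, the object has probably been removed.
import numpy as np

def removal_score(new_image, rendered_with_object, rendered_without_object, occlusion_mask):
    """All images are (H, W, 3) float arrays; occlusion_mask is True where the
    foreground object should hide the background according to the stored model."""
    diff_object = np.linalg.norm(new_image - rendered_with_object, axis=2)
    diff_background = np.linalg.norm(new_image - rendered_without_object, axis=2)
    m = occlusion_mask
    # Inside the expected occlusion area, count pixels matching the background better
    # than the object; a high ratio suggests the object is gone.
    votes_removed = np.count_nonzero(diff_background[m] < diff_object[m])
    return votes_removed / max(np.count_nonzero(m), 1)

# Usage: score = removal_score(img, render_full_model(pose), render_background(pose), mask)
# where render_full_model / render_background are hypothetical renderers of the stored mesh.
```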
18. Introduction to AR-Bot, an AR system for robot navigation
- Author
Caroline Baillard, Vincent Alleaume, Anthony Laurent, Tao Luo, Pierrick Jouet, and Matthieu Fradet
- Subjects
0209 industrial biotechnology, Ar system, business.industry, Computer science, Robot controller, 02 engineering and technology, Camera phone, 020901 industrial engineering & automation, 3d space, Phone, 0202 electrical engineering, electronic engineering, information engineering, Robot, 020201 artificial intelligence & image processing, Augmented reality, Computer vision, Artificial intelligence, User interface, business
- Abstract
We introduce a system for assigning navigation tasks to a self-moving robot using an Augmented Reality (AR) application running on a smartphone. The system relies on a robot controller and a central server hosted on a PC. The user points at a target location in the phone camera view and the robot moves accordingly. The robot and the phone are located independently in 3D space by registration methods running on the server, so they need neither to be spatially registered to each other nor to remain in direct line of sight. (A minimal sketch of the tap-to-goal computation follows this entry.)
- Published
- 2020
- Full Text
- View/download PDF
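The abstract above has the user tap a target location in the phone camera view. A minimal sketch of how such a tap could be turned into a floor-level navigation goal, assuming a pinhole camera model and a known z = 0 floor plane in the shared world frame (our assumptions, not details stated in the paper), is:

```python
# Back-project the tapped pixel and intersect the viewing ray with the floor plane z = 0.
# K: 3x3 intrinsics, R: world-to-camera rotation, C: camera centre in world coordinates.
import numpy as np

def tap_to_floor_goal(u, v, K, R, C):
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in camera coords
    d = R.T @ ray_cam                                     # same ray in world coords
    if abs(d[2]) < 1e-9:
        raise ValueError("ray is parallel to the floor")
    t = -C[2] / d[2]                                      # parameter where the ray hits z = 0
    if t <= 0:
        raise ValueError("floor intersection is behind the camera")
    return C + t * d                                      # 3D goal point for the robot planner

# Usage (hypothetical values): goal = tap_to_floor_goal(640, 360, K, R, C)
```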
19. Probeless and Realistic Mixed Reality Application in Presence of Dynamic Light Sources
- Author
Anthony Laurent, Salma Jiddi, Éric Marchand, Pierrick Jouet, Caroline Baillard, Matthieu Fradet, and Philippe Robert (Technicolor, Cesson Sévigné; Inria Rennes – Bretagne Atlantique; IRISA, Université de Rennes 1)
- Subjects
Computer science, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, 02 engineering and technology, Reflectance, Virtual reality, Metaverse, Photometry, 0202 electrical engineering, electronic engineering, information engineering, Computer vision, [INFO]Computer Science [cs], Specular reflection, Lighting, ComputingMethodologies_COMPUTERGRAPHICS, Scene analysis, business.industry, Modeling, [INFO.INFO-CV]Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV], 020207 software engineering, Textures, Scene Analysis, Reflectivity, Mixed reality, Mixed Reality, RGB color model, 020201 artificial intelligence & image processing, Augmented reality, Artificial intelligence, business, Shadows
- Abstract
In this work, we consider the challenge of achieving a coherent blending between the real and virtual worlds in the context of a Mixed Reality (MR) scenario. Specifically, we have designed and implemented an interactive demonstrator that shows a realistic MR application without using any light probe. The proposed system takes the RGB stream of the real scene as input and uses these data to recover both the position and the intensity of the light sources. The lighting can be static or dynamic, and the geometry of the scene can be partially altered. Our system is robust in the presence of specular effects and handles both uniform and textured surfaces.
- Published
- 2018
20. A Multi-resolution Approach for Color Correction of Textured Meshes
- Author
Matthieu Fradet, Caroline Baillard, and Mohammad Hossein Rouhani
- Subjects
Speedup, Vignetting, Markov random field, Computer science, Fragment (computer graphics), business.industry, Color correction, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Binary number, 020207 software engineering, 02 engineering and technology, Computer Science::Computer Vision and Pattern Recognition, Face (geometry), 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, Computer vision, Polygon mesh, Artificial intelligence, business, ComputingMethodologies_COMPUTERGRAPHICS
- Abstract
Mesh texturing is an essential part of 3D scene reconstruction: it enables a more realistic perception than the geometry alone and even compensates for inaccurate geometry. In this work we present a flexible formulation for color correction of textured scenes based on a per-face color augmentation. It can be employed as a post-processing step after selecting the best keyframe per face, to compensate for color differences between pairs of neighboring faces. We present a Markov Random Field (MRF) formulation to find the best keyframes as well as the optimal color augmentations. We use a simple model to avoid reflections and camera vignetting during view selection. Our model for color correction finds the piecewise-linear augmentation to be added to the texture patches of faces. It encourages smoothness inside every fragment while compensating for color differences along view transitions. Moreover, we speed up the optimization by breaking the formulation down into multiple binary MRFs that estimate the best augmentations from coarse to fine resolutions. Experimental results show that our method outperforms state-of-the-art methods. (A generic form of such an MRF energy is sketched after this entry.)
- Published
- 2018
- Full Text
- View/download PDF
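The abstract above casts keyframe selection and color augmentation as a Markov Random Field over mesh faces. In generic form (our notation, not taken from the paper), such an energy can be written as:

```latex
% Illustrative MRF energy over mesh faces (generic form; our notation, not the paper's):
%   l_f : keyframe label selected for face f
%   a_f : piecewise-linear color augmentation applied to face f
%   N   : set of pairs of faces sharing an edge
E(l, a) = \sum_{f} D_f(l_f, a_f) \;+\; \lambda \sum_{(f,g) \in N} V(l_f, a_f, l_g, a_g)
```

Here D_f scores how well the augmented texture taken from keyframe l_f fits face f, while V penalizes color discontinuities along shared edges; minimizing E jointly chooses keyframes and augmentations, and the coarse-to-fine scheme mentioned in the abstract solves a sequence of binary sub-problems of this form.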
21. Scientific and Technical Prospects
- Author
Jean-Marie Normand, Caroline Baillard, Gaël Seydoux, Fabien Lotte, Philippe Guillotel, Anatole Lécuyer, and Nicolas Mollet (École Centrale de Nantes; Technicolor R&I; Inria; LaBRI, Université de Bordeaux; University of Oxford)
- Subjects
Computer science, media_common.quotation_subject, User perception, Illusion, [INFO.INFO-MM]Computer Science [cs]/Multimedia [cs.MM], 020207 software engineering, 02 engineering and technology, Virtual reality, Field (computer science), [INFO.INFO-GR]Computer Science [cs]/Graphics [cs.GR], Entertainment, 03 medical and health sciences, 0302 clinical medicine, [INFO.INFO-LG]Computer Science [cs]/Machine Learning [cs.LG], Human–computer interaction, Perception, 0202 electrical engineering, electronic engineering, information engineering, [INFO.INFO-ET]Computer Science [cs]/Emerging Technologies [cs.ET], [INFO.INFO-HC]Computer Science [cs]/Human-Computer Interaction [cs.HC], [INFO.INFO-BT]Computer Science [cs]/Biotechnology, 030217 neurology & neurosurgery, media_common
- Abstract
This chapter offers a view of future technical and scientific prospects related to major evolutions in use. It explains the impact of technological advances on applications in the entertainment field and, more generally, on the use of VR-AR by the general public. The chapter then discusses the potential of brain-computer interactions (BCI). It explains the working principle of BCIs and analyzes the possibilities opened up by alternative perception mechanisms for interactions in virtual reality (VR). The chapter looks at how user perceptions can be altered in VR using pseudo-sensory feedback. Finally, it shows that it is possible to generate illusions of movement for an immobile user by altering their movement in VR, thus overcoming the current limitations of VR technology related to movement in the virtual environment (VE).
- Published
- 2018
- Full Text
- View/download PDF
22. Multi-device mixed reality TV
- Author
Matthieu Fradet, Anthony Laurent, Vincent Alleaume, Pierrick Jouet, and Caroline Baillard
- Subjects
Multimedia, Computer science, Headset, 05 social sciences, Home entertainment, 020207 software engineering, 02 engineering and technology, Multi-user, computer.software_genre, Mixed reality, Multi device, 0202 electrical engineering, electronic engineering, information engineering, 0501 psychology and cognitive sciences, Dimension (data warehouse), Joint (audio engineering), computer, 050107 human factors, User feedback
- Abstract
A multi-user experience extending standard TV content with AR elements is presented. It runs on both a standard tablet and a premium MR headset, the Microsoft HoloLens. A virtual TV mosaic is displayed around the TV screen and used as a GUI to control both the TV and the MR content. This paper focuses on the collaborative and personalized dimension offered by the experience. Unlike most AR applications, it can be run simultaneously by several users on different devices. The users can share content with others while keeping a personalized display. The added value of such an extended TV experience has been demonstrated through complementary types of content, and user feedback confirms a real interest in this new kind of home entertainment, at once immersive, interactive, collaborative and personalized.
- Published
- 2017
- Full Text
- View/download PDF
23. [POSTER] MR TV Mozaik: A New Mixed Reality Interactive TV Experience
- Author
Matthieu Fradet, Anthony Laurent, Vincent Alleaume, Pierrick Jouet, Fabien Servant, Tao Luo, Philippe Robert, and Caroline Baillard
- Subjects
business.product_category, Multimedia, Computer science, Headset, 020207 software engineering, 02 engineering and technology, Virtual reality, computer.software_genre, Metaverse, Mixed reality, Human–computer interaction, 0202 electrical engineering, electronic engineering, information engineering, Immersion (virtual reality), 020201 artificial intelligence & image processing, Augmented reality, business, Interactive television, computer, Headphones
- Abstract
Technicolor has been investigating how Mixed Reality technology could impact the future of home entertainment. We have designed and implemented a system that extends a standard TV experience with AR content, using a consumer tablet or a headset. A virtual TV mosaic is displayed around the TV screen and used as a GUI to control both the TV and the MR content. Using this interface, the user can switch TV content, display metadata in AR (subtitles, text information or a program guide), enhance TV content with interactive 3D objects blended into the environment, or play a game in interaction with the real world. The interactions between the real and virtual worlds are handled by a scene analysis pre-processing stage, which provides information about both the geometry and the lighting of the real environment. These real-virtual interactions strongly reinforce the feeling of immersion. User feedback shows that the concept is very promising.
- Published
- 2017
- Full Text
- View/download PDF
24. Mixed Reality Extended TV
- Author
Matthieu Fradet, Philippe Robert, Caroline Baillard, Vincent Alleaume, Fabien Servant, Pierrick Jouet, Tao Luo, and Anthony Laurent
- Subjects
Computer science, business.industry, Solid modeling, Virtual reality, Computational geometry, Mixed reality, Entertainment, Computer graphics (images), Preprocessor, RGB color model, Augmented reality, Computer vision, Artificial intelligence, business, ComputingMethodologies_COMPUTERGRAPHICS
- Abstract
The Extended TV application enhances audiovisual content displayed on a TV using Mixed Reality technology. During preprocessing, the close environment of the TV is scanned with a consumer depth camera. The captured RGB-D data are analyzed, providing models of both the 3D geometry and the lighting of the real scene. At runtime, the TV is watched through a tablet, and virtual objects can apparently come out of the screen and start populating the user's environment. Virtual objects can be occluded by real objects, and virtual shadows are consistent with the real ones.
- Published
- 2016
- Full Text
- View/download PDF
25. Realistic Road Modelling for Driving Simulators using GIS Data
- Author
Caroline Baillard and Guillaume Despine
- Subjects
Ground truth, Virtual machine, Road surface, Driving simulator, Traffic simulation, Graph (abstract data type), 3d model, Data mining, computer.software_genre, Network topology, computer, Simulation
- Abstract
In this paper, an approach is proposed for creating realistic models of existing roads adapted to driving simulation. Unlike most previous work, which is based on generic construction rules, urbanism patterns or sociological behaviour, our approach aims at reproducing existing road networks. First, a data model based on a multi-layered graph is presented. This model can handle the three representation levels required by traffic simulation: the road network, the graphical level and the traffic level. In the second part of the paper, a method for modelling existing roads is proposed. The novelty of the approach is the use of existing road databases to automatically create a virtual environment (3D model and traffic organisation) close to ground truth. An existing 3D GIS database provides accurate information about road axes and the 3D geometry of the ground and buildings, whereas a navigation road database helps refine the model by providing clues about network topology and traffic rules. The resulting virtual road system reproduces the real world and has been successfully interfaced with a driving simulator. (A sketch of such a multi-layered road model follows this entry.)
- Published
- 2011
- Full Text
- View/download PDF
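The abstract above describes a multi-layered graph covering the road-network, graphical and traffic levels. A minimal sketch of such a layered structure (hypothetical field names, not the paper's schema) could look like this:

```python
# One shared set of road segments, with topology, geometry and traffic attributes kept
# in separate layers that all reference the same segment ids.
from dataclasses import dataclass, field

@dataclass
class RoadSegment:
    seg_id: str
    start_node: str
    end_node: str                                       # network layer: junction topology

@dataclass
class GraphicalLayer:
    centerlines: dict = field(default_factory=dict)     # seg_id -> list of (x, y, z) points
    widths: dict = field(default_factory=dict)          # seg_id -> road surface width (m)

@dataclass
class TrafficLayer:
    speed_limits: dict = field(default_factory=dict)    # seg_id -> km/h
    lane_counts: dict = field(default_factory=dict)     # seg_id -> number of lanes

@dataclass
class RoadModel:
    segments: dict = field(default_factory=dict)        # seg_id -> RoadSegment
    graphics: GraphicalLayer = field(default_factory=GraphicalLayer)
    traffic: TrafficLayer = field(default_factory=TrafficLayer)

# Usage: the GIS database would populate segments and graphics, while the navigation
# database fills the traffic layer for the same seg_ids.
model = RoadModel()
model.segments["s1"] = RoadSegment("s1", "n1", "n2")
model.graphics.centerlines["s1"] = [(0.0, 0.0, 0.0), (50.0, 0.0, 0.2)]
model.traffic.speed_limits["s1"] = 50
```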
26. Automatic reconstruction of piecewise planar models from multiple views
- Author
Andrew Zisserman and Caroline Baillard
- Subjects
business.industry, Feature extraction, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Initialization, Pattern recognition, Iterative reconstruction, Planar, Photogrammetry, Robustness (computer science), Computer Science::Computer Vision and Pattern Recognition, Piecewise, Computer vision, Artificial intelligence, business, Multiple view, ComputingMethodologies_COMPUTERGRAPHICS, Mathematics
- Abstract
A new method is described for automatically reconstructing 3D planar faces from multiple images of a scene. The novelty of the approach lies in the use of inter-image homographies to validate and best estimate the plane, and in the minimal initialization requirements: only a single 3D line with a textured neighbourhood is needed to generate a plane hypothesis. The planar facets enable line grouping and also the construction of parts of the wireframe that were missed due to the inevitable shortcomings of feature detection and matching. The method allows a piecewise planar model of a scene to be built completely automatically, with no user intervention at any stage, given only the images and camera projection matrices as input. The robustness and reliability of the method are illustrated on several examples, from both aerial and interior views. (The standard plane-induced homography relation is recalled after this entry.)
- Published
- 2003
- Full Text
- View/download PDF
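The abstract above validates and estimates planes through inter-image homographies. For reference, the standard plane-induced homography relation (textbook notation, not specific to this paper) is:

```latex
% Plane-induced homography between two views:
%   n : unit normal of the plane, d : distance of the plane to the first camera centre,
%   (R, t) : relative pose from the first to the second camera, K and K' : intrinsics.
H \simeq K' \left( R - \frac{t\, n^{\top}}{d} \right) K^{-1}, \qquad x' \simeq H\, x
```

Image points x lying on the plane in the first view map to x' in the second; checking how well texture transferred by a hypothesized H matches the second image is what allows a plane hypothesis to be validated and refined.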
27. From Images to Virtual and Augmented Reality
- Author
Caroline Baillard, Andrew Fitzgibbon, Geoffrey Cross, and Andrew Zisserman
- Subjects
Sequence, business.industry, Computer science, Calibration (statistics), Epipolar geometry, Frame (networking), ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Computer-mediated reality, Mixed reality, Augmented reality, Computer vision, Artificial intelligence, business, Scale (map), ComputingMethodologies_COMPUTERGRAPHICS
- Abstract
We describe a method to completely automatically recover 3D scene structure together with a camera for each frame from a sequence of images acquired by an unknown camera undergoing unknown movement. Previous approaches have used calibration objects or landmarks to recover this information, and are therefore often limited to a particular scale. The approach of this paper is far more general, since the “landmarks” are derived directly from the imaged scene texture. The method can be applied to a large class of scenes and motions, and is demonstrated here for sequences of interior and exterior scenes using both controlled-motion and hand-held cameras.
- Published
- 2000
- Full Text
- View/download PDF
28. Above-Ground Objects in Urban Scenes from Medium Scale Aerial Imagery
- Author
Henri Maître, Olivier Jamet, Caroline Baillard, and Olivier Dissard
- Subjects
Above ground, Geography, Stereo image, business.industry, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Computer vision, Stage (hydrology), Artificial intelligence, business, Digital elevation model, Medium scale, Aerial image, Aerial imagery
- Abstract
In this paper, we address the problem of handling intricate urban scenes with stereo pairs of images for the reconstruction of 3D objects, when neither the types of 3D objects nor their arrangement are known in advance. We argue for a two-stage focusing analysis: first, an attention-focusing stage based only on image criteria provides areas of interest; then, a model-driven characterization of these areas leads to the model-based reconstruction of the objects.
- Published
- 1997
- Full Text
- View/download PDF
29. Detection of above ground in urban area: application to DTM generation
- Author
Olivier Jamet, Henri Maître, Caroline Baillard, and Olivier Dissard
- Subjects
business.industry, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Context (language use), Stereoscopy, Image processing, Terrain, Image segmentation, 3D modeling, law.invention, Geography, law, Computer vision, Segmentation, Artificial intelligence, business, Image resolution, Remote sensing
- Abstract
A new approach to the detection of above-ground objects from a pair of stereoscopic images in a general urban context is proposed. It includes a stereoscopic matching stage well adapted to our task, which provides a digital surface model (DSM). A segmentation of the DSM is then performed, and regions are classified as ground or above-ground. The strength of the method is its ability to manage extended above-ground regions with several heights and arbitrary shapes, as well as the case of sloping ground. An application to digital terrain model (DTM) generation in urban areas is discussed. An assessment of both above-ground extraction and DTM generation on difficult scenes shows the feasibility of the approach. (A rough DSM-to-DTM sketch follows this entry.)
- Published
- 1996
- Full Text
- View/download PDF
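The abstract above classifies a digital surface model into ground and above-ground before generating a DTM. A rough sketch of that workflow (a simplified morphological baseline with assumed parameter values, not the paper's classifier) might be:

```python
# Split a digital surface model (DSM) into ground / above-ground, then produce a DTM by
# interpolating the ground surface underneath the above-ground regions.
import numpy as np
from scipy.ndimage import minimum_filter
from scipy.interpolate import griddata

def dsm_to_dtm(dsm, window=25, height_threshold=2.5):
    """dsm: 2D array of elevations (metres); window: local-minimum filter size, assumed
    larger than the largest building footprint in pixels; threshold in metres."""
    ground_estimate = minimum_filter(dsm, size=window)           # coarse bare-earth guess
    above_ground = (dsm - ground_estimate) > height_threshold    # buildings, vegetation, ...
    rows, cols = np.indices(dsm.shape)
    ground_pts = np.column_stack([rows[~above_ground], cols[~above_ground]])
    dtm = griddata(ground_pts, dsm[~above_ground],
                   (rows, cols), method="linear")                 # fill under above-ground areas
    return dtm, above_ground

# Usage: dtm, mask = dsm_to_dtm(dsm_array)
```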