Visual-based Global Localization from Ceiling Images using Convolutional Neural Networks
- Author
- Olivier Aycard, Philip Scales, Mykhailo Rimel
- Affiliations: Laboratoire d'Informatique de Grenoble (LIG), Centre National de la Recherche Scientifique (CNRS) / Université Grenoble Alpes (UGA) / Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP); Groupe d'Étude en Traduction Automatique/Traitement Automatisé des Langues et de la Parole (GETALP); Artificial Intelligence and Robotics (Marvin); École nationale supérieure d'informatique et de mathématiques appliquées (ENSIMAG); Université Joseph Fourier - Grenoble 1 (UJF). Funding: ANR-19-P3IA-0003, MIAI @ Grenoble Alpes (2019)
- Subjects
- Visual-based Localization; Global Localization; Ceiling; Mobile Robot; Convolutional Neural Network (CNN); Computer Vision; Artificial Intelligence; [INFO.INFO-AI] Computer Science [cs]/Artificial Intelligence [cs.AI]; [INFO.INFO-TI] Computer Science [cs]/Image Processing [eess.IV]; [INFO.INFO-RB] Computer Science [cs]/Robotics [cs.RO]
- Abstract
The problem of global localization consists in determining the position of a mobile robot inside its environment without any prior knowledge of its position. Existing approaches to indoor localization present drawbacks such as the need to prepare the environment, dependency on specific features of the environment, and high-quality sensor and computing hardware requirements. We focus on ceiling-based localization, which is usable in crowded areas and does not require expensive hardware. While the overall goal of our research is to develop a complete, robust global indoor localization framework for a wheeled mobile robot, in this paper we focus on one part of this framework: determining the robot's pose (2-DoF position plus orientation) from a single ceiling image. We use convolutional neural networks to learn the correspondence between a single image of the ceiling of the room and the mobile robot's pose. We conduct experiments in real-world indoor environments that are significantly larger than those used in state-of-the-art learning-based 6-DoF pose estimation methods. In spite of the difference in environment size, our method yields comparable accuracy.
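Since the paper evaluates how accurately a predicted pose (2-DoF position plus orientation) matches the ground truth, one subtlety worth making concrete is angular wrap-around: an orientation of 3.0 rad and one of -3.0 rad differ by about 0.28 rad, not 6.0 rad. The sketch below is an illustrative helper under our own naming, not code from the paper; it computes the position and wrapped orientation error between two (x, y, theta) poses.

```python
import math

def pose_error(pred, gt):
    """Illustrative (not from the paper): error between two (x, y, theta) poses.

    Returns the Euclidean position error and the absolute orientation error
    wrapped into [0, pi], so 3.0 rad vs -3.0 rad differ by ~0.283 rad.
    """
    pos_err = math.hypot(pred[0] - gt[0], pred[1] - gt[1])
    dtheta = pred[2] - gt[2]
    # atan2(sin, cos) maps the raw difference into (-pi, pi]
    ang_err = abs(math.atan2(math.sin(dtheta), math.cos(dtheta)))
    return pos_err, ang_err

pos, ang = pose_error((1.0, 2.0, 3.0), (1.0, 2.0, -3.0))  # ang ≈ 0.283 rad
```

This wrapped metric is the standard way to report orientation accuracy for planar robot poses, since a naive absolute difference would penalize predictions near the ±pi boundary unfairly.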
- Published
- 2021