281 results for "Automática, Robótica y Visión Artificial"
Search Results
2. Cambio automatizado de una bombilla
- Author
-
Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, and Automática, Robótica y Visión Artificial
- Abstract
This video shows the automated task of changing a streetlight bulb. Time-independent trajectory tracking based on visual servoing was used for the robot's long-range movements. The force exerted by a Barrett hand was also controlled for correct manipulation of the bulb. The position of the streetlight's socket was detected using measurements from a range camera. More information at http://www.aurova.ua.es:8080/proyectos/dpi2005/.
- Published
- 2010
3. MOSPPA: monitoring system for palletised packaging recognition and tracking
- Author
-
Julio Castaño-Amoros, Francisco Fuentes, Pablo Gil, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, and Automática, Robótica y Visión Artificial
- Subjects
Pallets recognition and tracking, Pallets workflow control, Control and Systems Engineering, Cardboard packaging, Mechanical Engineering, Manufacturing automation, Industrial and Manufacturing Engineering, Software, Computer Science Applications - Abstract
The paper industry manufactures corrugated cardboard packaging, which is unassembled and stacked on pallets to be supplied to its customers. Human operators usually classify these pallets according to the physical features of the cardboard packaging. This process can be slow, causing congestion on the production line. To optimise the logistics of this process, we propose a visual recognition and tracking pipeline that monitors the palletised packaging while it moves inside the factory on roller conveyors. Our pipeline has a two-stage architecture composed of Convolutional Neural Networks: one for oriented pallet detection and recognition, and another with which to track the identified pallets. We carried out an extensive study using different methods for the pallet detection and tracking tasks and found that the oriented object detection approach was the most suitable. Our proposal recognises and tracks different configurations and visual appearances of palletised packaging, providing statistical data in real time with which to assist human operators in decision-making. We tested the precision and performance of the system at the Smurfit Kappa facilities. Our proposal attained an Average Precision (AP) of 0.93 at 14 Frames Per Second (FPS), losing only 1% of detections. Our system is, therefore, able to optimise and speed up the logistic distribution process. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. Research work was partially funded by the private project (SMURFITKAPPA-21), supported by Smurfit Kappa Iberoamericana S.A. and the University of Alicante.
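As a rough, editor-added illustration of the second stage described in this abstract (linking per-frame detections into tracks), the sketch below implements greedy IoU association. It is not the MOSPPA code: the real pipeline uses oriented boxes and CNNs for both stages, and all names here (Track, update_tracks) are illustrative.

```python
# Minimal detection-to-track association sketch: per-frame boxes are linked
# across frames by IoU. Axis-aligned boxes are used for brevity.
from dataclasses import dataclass
from itertools import count

@dataclass
class Track:
    track_id: int
    box: tuple          # (x1, y1, x2, y2)
    missed: int = 0     # frames since last association

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def update_tracks(tracks, detections, ids=count(), iou_min=0.3, max_missed=5):
    """Greedy IoU association: extend existing tracks, spawn new ones."""
    unmatched = list(detections)
    for t in tracks:
        best = max(unmatched, key=lambda d: iou(t.box, d), default=None)
        if best is not None and iou(t.box, best) >= iou_min:
            t.box, t.missed = best, 0
            unmatched.remove(best)
        else:
            t.missed += 1
    tracks = [t for t in tracks if t.missed <= max_missed]
    tracks += [Track(next(ids), d) for d in unmatched]
    return tracks
```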
- Published
- 2023
- Full Text
- View/download PDF
4. Manipulación visual-táctil para la recogida de residuos domésticos en exteriores
- Author
-
Castaño-Amorós, Julio, Páez-Ubieta, Ignacio de Loyola, Gil, Pablo, Puente, Santiago Timoteo, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
Reconocimiento de objetos, Percepción táctil, Robotic manipulation, General Computer Science, Object location, Control and Systems Engineering, Detección visual, Localización de objetos, Visual detection, Object recognition, Tactile perception, Manipulación robótica - Abstract
This work presents a perception system applied to robotic manipulation that is able to assist in navigation, household waste classification and collection in outdoor environments. The system is made up of optical tactile sensors, RGBD cameras and a LiDAR, integrated on a mobile platform with a robot manipulator and a robotic gripper. Our system is divided into three software modules, two vision-based and one tactile-based. The vision-based modules use CNNs to localize and recognize solid household waste and to estimate grasping points. The tactile-based module, which also uses CNNs and image processing, adjusts the gripper opening to control the grasp from touch data. Our proposal achieves localization errors of around 6%, a recognition accuracy of 98%, and ensures grasp stability in 91% of the attempts. The sum of the runtimes of the three modules is less than 750 ms. This work was funded by the European Regional Development Fund (FEDER) and the regional government of the Generalitat Valenciana through project PROMETEO/2021/075; computational resources were funded through grant IDIFEDER/2020/003.
- Published
- 2022
- Full Text
- View/download PDF
5. Measuring Object Rotation via Visuo-Tactile Segmentation of Grasping Region
- Author
-
Castaño Amorós, Julio, Gil, Pablo, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
Grasping, Force and Tactile Sensing, Perception for Grasping and Manipulation - Abstract
When carrying out robotic manipulation tasks, objects occasionally fall as a result of the rotation caused by slippage. This can be prevented by obtaining tactile information that provides better knowledge of the physical properties of the grasp. In this paper, we estimate the rotation angle of a grasped object when slippage occurs. We implement a system made up of a neural network that segments the contact region and an algorithm that estimates the rotation angle of that region. This method is applied to DIGIT tactile sensors. Our system has additionally been trained and tested with our publicly available dataset, which is, to the best of our knowledge, the first dataset on tactile segmentation from non-synthetic images to appear in the literature, and with which we attained Dice and IoU scores of 95% and 90% in the worst scenario. Moreover, we obtained a maximum error of ≈ 3° when testing with objects not previously seen by our system in 45 different lifts. This proves that our approach is able to detect the slippage movement, enabling a reaction that prevents the object from falling. This work was supported by the Ministry of Science and Innovation of the Spanish Government through the research project PID2021-122685OB-I00 and by the University of Alicante through the grant UAFPU21-26.
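For reference, the two segmentation metrics reported above (Dice and IoU) can be computed on binary contact masks as in this minimal numpy sketch; it is illustrative, not the authors' evaluation code.

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-9)

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + 1e-9)

# Toy example: two overlapping square contact regions on a 64x64 tactile image.
pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
gt = np.zeros((64, 64), bool); gt[15:45, 15:45] = True
print(f"Dice={dice(pred, gt):.3f}  IoU={iou(pred, gt):.3f}")
```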
- Published
- 2023
6. Rotational Slippage Prediction from Segmentation of Tactile Images
- Author
-
Castaño Amorós, Julio, Gil, Pablo, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
FOS: Computer and information sciences, Agarre robótico, Computer Science - Robotics, Reconocimiento táctil, Visión por Computador, Detección Táctil, DIGIT, Inteligencia Artificial, Robótica, Robotics (cs.RO), Manipulación robótica - Abstract
Adding tactile sensors to a robotic system is becoming common practice to achieve more complex manipulation skills than those of robotic systems that only use external cameras to manipulate objects. The key advantage of tactile sensors is that they provide extra information about the physical properties of the grasp. In this paper, we implemented a system to predict and quantify the rotational slippage of objects in hand using the vision-based tactile sensor known as DIGIT. Our system comprises a neural network that obtains the segmented contact region (object-sensor), from which the slippage rotation angle is then calculated using a thinning algorithm. Besides, we created our own tactile segmentation dataset, which, as far as we know, is the first one in the literature, to train and evaluate our neural network, obtaining results of 95% and 91% in Dice and IoU metrics. In real-scenario experiments, our system is able to predict rotational slippage with a maximum mean rotational error of 3 degrees with previously unseen objects. Thus, our system can be used to prevent an object from falling due to slippage. 3 pages, 4 figures, accepted at ICRA 2023 ViTac Workshop: Blending Virtual and Real Visuo-Tactile Perception
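A hedged sketch of the angle-estimation step: the paper uses a thinning (skeletonization) algorithm on the segmented contact region, whereas this toy version uses the principal axis of the mask pixels as a simpler stand-in for the region's orientation; the rotational slippage is then the change of this angle between two frames.

```python
import numpy as np

def contact_angle(mask: np.ndarray) -> float:
    """Orientation (degrees) of a binary contact mask via PCA of its pixels."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    cov = pts.T @ pts / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)
    vx, vy = eigvecs[:, np.argmax(eigvals)]   # dominant axis of the region
    return np.degrees(np.arctan2(vy, vx))

# Slippage angle = contact_angle(mask_t) - contact_angle(mask_0).
mask = np.zeros((64, 64), bool); mask[20:44, 30:34] = True   # vertical bar
print(contact_angle(mask))   # close to +/-90 degrees
```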
- Published
- 2023
7. Estado del arte de la educación en automática
- Author
-
Muñoz de la Peña, David, Domínguez, Manuel, Gómez-Estern Aguilar, Fabio, Reinoso, Óscar, Torres, Fernando, Dormido, Sebastián, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Automática, Robótica y Visión Artificial, Universidad de Sevilla. Departamento de Ingeniería de Sistemas y Automática, and Universidad de Sevilla. TEP950: Estimación, Predicción, Optimización y Control
- Subjects
Relaciones con la industria, Evaluación automática, General Computer Science, Distance learning and learning management systems, E-learning, Laboratorios virtuales y remotos, Formación continua, Interactive tools, virtual and remote laboratories, Herramientas interactivas, laboratorios virtuales y remotos, Educación a distancia y sistemas de gestión del aprendizaje, Control engineering curriculum, Long-life learning, Herramientas interactivas, Teaching methodologies, Industry relations, Teaching tools and laboratories, Herramientas docentes y laboratorios, Virtual and remote laboratories, Interactive tools, Prácticas docentes, Experimental platforms, Entornos de experimentación, Control and Systems Engineering, Curricula del ingeniero de control, Automatic evaluation, Ingeniería de Sistemas y Automática - Abstract
Control education is a mature area in which many professors and researchers have worked hard to face the challenge of providing a versatile education with a strong scientific base, all without losing sight of the needs of industry, adapting the contents, methodologies and tools to the continuous social and technological changes of our time. This article presents a reflection on the role of automation in today's society, a review of the traditional objectives of control education through seminal works in the area and, finally, a review of the main current trends.
- Published
- 2022
- Full Text
- View/download PDF
8. Assistance Robotics and Sensors
- Author
-
Santiago T. Puente, Fernando Torres, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, and Automática, Robótica y Visión Artificial
- Subjects
Sensors, Electrical and Electronic Engineering, Assistance Robotics, Biochemistry, Instrumentation, Atomic and Molecular Physics, and Optics, Analytical Chemistry - Abstract
In recent years, assistive robotics has experienced significant growth, driven by advances in sensor and processing technologies and by the increasing interest in making interactions between robots and humans more natural [...]
- Published
- 2023
9. Robust Self-Tuning Data Association for Geo-Referencing Using Lane Markings
- Author
-
Miguel Ángel Muñoz-Bañón, Jan-Hendrik Pauls, Haohao Hu, Christoph Stiller, Francisco A. Candelas, Fernando Torres, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, and Automática, Robótica y Visión Artificial
- Subjects
FOS: Computer and information sciences, Control and Optimization, Mechanical Engineering, Computer Vision and Pattern Recognition (cs.CV), Biomedical Engineering, Computer Science - Computer Vision and Pattern Recognition, Computer Science Applications, Human-Computer Interaction, Autonomous vehicle navigation, Computer Science - Robotics, Artificial Intelligence, Control and Systems Engineering, Localization, Computer Vision and Pattern Recognition, Robotics (cs.RO) - Abstract
Localization in aerial imagery-based maps offers many advantages, such as global consistency, geo-referenced maps, and the availability of publicly accessible data. However, the landmarks that can be observed from both aerial imagery and on-board sensors are limited, which leads to ambiguities or aliasing during data association. Building upon a highly informative representation (that allows efficient data association), this paper presents a complete pipeline for resolving these ambiguities. Its core is a robust self-tuning data association that adapts the search area depending on the entropy of the measurements. Additionally, to smooth the final result, we adjust the information matrix for the associated data as a function of the relative transform produced by the data association process. We evaluate our method on real data from urban and rural scenarios around the city of Karlsruhe in Germany. We compare state-of-the-art outlier mitigation methods with our self-tuning approach, demonstrating a considerable improvement, especially in outer-urban scenarios. The paper is being considered for publication in "IEEE Robotics and Automation Letters" (RA-L).
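As a minimal illustration of the self-tuning idea (search area adapted to the entropy of the measurements), assuming hypothetical association weights over candidate landmarks; this is not the authors' implementation.

```python
import numpy as np

def measurement_entropy(weights):
    p = np.asarray(weights, float)
    p = p / p.sum()
    return -np.sum(p * np.log(p + 1e-12))

def search_radius(weights, r_min=1.0, r_max=10.0):
    """Grow the association search area with the ambiguity of the candidates."""
    h = measurement_entropy(weights)
    h_max = np.log(len(weights))        # uniform weights = maximal ambiguity
    return r_min + (h / h_max) * (r_max - r_min)

print(search_radius([0.97, 0.01, 0.01, 0.01]))  # confident -> small area
print(search_radius([0.25, 0.25, 0.25, 0.25]))  # aliased   -> large area
```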
- Published
- 2022
10. Editorial: Robotic Handling of Deformable Objects
- Author
-
Jihong Zhu, Claire Dune, Miguel Aranda, Youcef Mezouar, Juan Antonio Corrales, Pablo Gil, Gonzalo Lopez-Nicolas, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
Human-Computer Interaction, Control and Optimization, Robotic handling, Deformable objects, Artificial Intelligence, Control and Systems Engineering, Mechanical Engineering, Biomedical Engineering, Computer Vision and Pattern Recognition, Computer Science Applications - Abstract
The papers in this special section focus on the robotic handling of deformable objects. Object handling and manipulation is a recurrent theme in robotics, moving progressively from simple rigid objects to articulated objects and then to complex deformable objects. In recent years, there has been growing interest in the development of robotic systems that are able to handle deformable objects, due to many potential applications in the industrial and domestic fields, but also in robotics for remote, hostile environments. Despite the promising prospects, considering object deformation in grasping and manipulation poses significant challenges affecting almost all aspects of robotics: perception, modeling, planning, control and grasping.
- Published
- 2022
11. Towards the robotic collection of household waste outdoors: a visual-tactile approach
- Author
-
Gil, Pablo, Puente Méndez, Santiago T., Castaño Amorós, Julio, Páez Ubieta, Ignacio de Loyola, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
Object recognition and location, Percepción táctil, Detección visual y táctil, Robotic manipulation, Reconocimiento y localización de objetos, Visual-tactile detection, Tactile perception, Manipulación robótica, Ingeniería de Sistemas y Automática - Abstract
Paper presented at the XII Jornadas Nacionales de Robótica, Málaga, Spain, May 18-20, 2022. In this work, we present a robotic handling system for the collection of certain types of household waste outdoors. The system is made up of a robot with a gripper equipped with touch sensors, several RGBD cameras and a LiDAR. The system consists of three software modules, two based on visual perception and a third on tactile perception. One visual module finds and localizes possible waste; the other recognizes an object as waste, catalogues it and estimates its grasp. The tactile module is used to control the grasp from contact data. All perception modules of the robotic system were tested in outdoor environments, obtaining an average location error of less than 6% of the object distance, a mAP75 in detection and recognition of 98%, and a tactile accuracy of 91%. The sum of the average runtimes of the three modules is less than 350 ms. This research was funded by the regional government of the Generalitat Valenciana and FEDER through project PROMETEO/2021/075; computational resources were funded through grant IDIFEDER/2020/003.
- Published
- 2022
12. Detection and Location of Domestic Waste for Planning Its Collection Using an Autonomous Robot
- Author
-
Pascual Tornero, Santiago Puente, Pablo Gil, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
Domestic waste, Autonomous robot, Robot navigation, Deep learning, Detection and location of objects, Ingeniería de Sistemas y Automática - Abstract
Paper submitted to the 8th International Conference on Control, Automation and Robotics (ICCAR), Xiamen, China, April 8-10, 2022. This paper presents a detection and location system for waste recognition in outdoor environments that can be used on an autonomous garbage-collection robot. It is composed of a camera and a LiDAR. For the detection task, several YOLO models were trained and tested to classify waste using our own dataset acquired with the camera. The image coordinates predicted by the best detector are used to compute the location relative to the camera. The LiDAR is then used to obtain a global waste location relative to the robot by transforming the coordinates of the center of each trash instance. Our detection approach was tested in outdoor environments, obtaining a mAP@.5 of around 0.99, a mAP@.95 of over 0.84, and an average detection time of less than 40 ms, making real-time operation possible. The location method was also tested with objects at distances of up to 8 m, obtaining an average error smaller than 0.25 m. This research was funded by the Spanish Government through the project RTI2018-094279-B-I00. Besides, computer facilities were provided by the Valencian Government and FEDER through IDIFEDER/2020/003.
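A small back-projection sketch of the location step (pixel detection plus depth yields a 3D point in the robot frame). The intrinsics K and extrinsics T below are placeholder values, not the paper's calibration; the actual system takes depth from the LiDAR.

```python
import numpy as np

def pixel_to_robot(u, v, depth, K, T_robot_cam):
    """Back-project a detection center (u, v) with depth into the robot frame."""
    x = (u - K[0, 2]) * depth / K[0, 0]   # pinhole camera model
    y = (v - K[1, 2]) * depth / K[1, 1]
    p_cam = np.array([x, y, depth, 1.0])
    return (T_robot_cam @ p_cam)[:3]

K = np.array([[615.0, 0, 320.0], [0, 615.0, 240.0], [0, 0, 1.0]])
T = np.eye(4)   # placeholder camera-to-robot extrinsics
print(pixel_to_robot(400, 260, 3.2, K, T))
```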
- Published
- 2022
- Full Text
- View/download PDF
13. Overview and future trends of control education
- Author
-
Muñoz de la Peña, David, Domínguez, Manuel, Gomez-Estern, Fabio, Reinoso García, Óscar, Torres, Fernando, Dormido Bencomo, Sebastián, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Automática, Robótica y Visión Artificial, Universidad de Sevilla. Departamento de Ingeniería de Sistemas y Automática, and Universidad de Sevilla. TEP950: Estimación, Predicción, Optimización y Control
- Subjects
Control and Systems Engineering, Challenges for control engineering curricula, Social issues of control education, Pedagogy in control engineering, Cultural and social issues of control education - Abstract
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/). Control education is a mature area in which many professors and researchers have worked hard to face the challenge of providing a versatile education with a strong scientific base. All this without losing sight of the needs of the industry; adapting the contents, methodologies, and tools to the continuous social and technological changes of our time. This article presents a reflection on the role of automatic control in today's society, a review of the traditional objectives of control education through seminal work in the area, and finally a review of the main current trends.
- Published
- 2022
14. Lidar Odometry and GNSS Fusion Through Relative Transforms
- Author
-
Muñoz-Bañón, Miguel Á., Velasco, Edison P., Candelas-Herías, Francisco A., Torres, Fernando, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, and Automática, Robótica y Visión Artificial
- Subjects
Localización, LiDAR odometry, GNSS, GNSS-odometry fusion, Robótica móvil, Localization, Odometría LiDAR, Fusión GNSS-odometría, Mobile robotics - Abstract
In this work, a method has been developed to avoid error accumulation in LiDAR odometry and to provide global consistency to it through fusion with a multi-GNSS system. The method estimates the relative transformation between the odometry coordinate frame and the map coordinate frame defined by the GNSS. Because it estimates a transformation rather than a complete trajectory, the algorithm is extremely lightweight: the number of parameters to be estimated is small and constant over time, unlike those usually used in the state of the art. The proposed method has been validated in the University of Alicante scientific park, where the vehicle navigated autonomously for more than 20 km without accumulated error. This research was funded by the Generalitat Valenciana and FEDER, and by the Ministry of Innovation and Universities, through projects PROMETEO/2021/075 and RTI2018-094279-B-I00 and grant ACIF/2019/088.
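The core estimation can be illustrated with a least-squares (Kabsch-style) fit of the odometry-to-map transform from matched 2D positions; a sketch under the assumption of synchronized odometry/GNSS point pairs, not the authors' implementation.

```python
import numpy as np

def fit_frame_transform(odom_xy, gnss_xy):
    """Least-squares SE(2) transform mapping odometry points onto GNSS points."""
    mu_o, mu_g = odom_xy.mean(0), gnss_xy.mean(0)
    H = (odom_xy - mu_o).T @ (gnss_xy - mu_g)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_g - R @ mu_o
    return R, t                              # 3 effective parameters: angle + 2D offset

# Synthetic check: recover a known 25-degree rotation and offset.
rng = np.random.default_rng(0)
odom = rng.uniform(0, 50, (100, 2))
ang = np.radians(25.0)
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
gnss = odom @ R_true.T + np.array([120.0, -40.0])
R_est, t_est = fit_frame_transform(odom, gnss)
print(R_est, t_est)
```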
- Published
- 2022
15. Desarrollos en BLUE para manipulación móvil en entornos exteriores no estructurados
- Author
-
Julio Castaño-Amorós, Ignacio de L. Páez-Ubieta, Miguel Ángel Muñoz-Bañón, Edison Velasco-Sánchez, Francisco A. Candelas Herías, Santiago T. Puente Méndez, Pablo Gil Vázquez, Fernando Torres Medina, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
Localización, Navegación, Robótica móvil, Localization, Unstructured environments, Entornos no estructurados, Manipulación de objetos, Mobile robotics, Object manipulation, Navigation, Ingeniería de Sistemas y Automática, UGV - Abstract
Robotics research requires experimental platforms with an open architecture. Several commercial platforms with these characteristics exist, but their high economic cost is a drawback for algorithm development. In this paper we present the advances implemented in our BLUE unmanned ground vehicle, focused on the research and development of methods for Mobile Manipulation for Unstructured Outdoor Environments (MOMUE). In addition, we detail the hardware and software added to BLUE in recent years and show the packages and experiments developed for localization and navigation tasks, as well as for object manipulation operations in mobile robotics. This research was funded by the regional government of the Generalitat Valenciana, FEDER and the Spanish Ministry of Innovation and Universities through projects PROMETEO/2021/075, IDIFEDER/2020/003 and RTI2018-094279-B-I00 and grant ACIF/2019/088.
- Published
- 2022
16. Integration and Evaluation of a Multi-GNSS System in an Unmanned Ground Vehicle
- Author
-
Miguel Ángel Muñoz-Bañón, Edison P. Velasco, Francisco A. Candelas-Herías, Santiago Timoteo Puente Méndez, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
u-blox, Robótica móvil, Multi-GNSS, GNSS-RTK, Mobile robotics, Industry 4.0, Ingeniería de Sistemas y Automática - Abstract
In this work, we present a low-cost multi-GNSS system made up of three u-blox NEO-M8N modules mounted on an unmanned ground vehicle, and compare it with a GNSS-RTK system (u-blox C94-M8P). The data redundancy of the multi-GNSS system allows a greater number of samples and better data filtering. Through experiments on different circuits, we obtained sample rates of up to 3 Hz. In addition, the multi-GNSS system shows a smaller error than the GNSS-RTK system (u-blox C94-M8P) when the latter does not have a direct line of sight to its RTK base station. This work is part of project RTI2018-094279-B-I00, funded by the Ministry of Science, Innovation and Universities, and was also supported by the Generalitat Valenciana and the European Regional Development Fund (FEDER) through grants PRE2019-088069 and ACIF/2019/088, respectively.
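A toy version of the redundancy idea: fuse the three receivers' fixes by rejecting outliers against the median and averaging the rest. The threshold and the metres-per-degree conversion are illustrative assumptions, not the paper's filter.

```python
import numpy as np

def fuse_fixes(fixes, max_dev=3.0):
    """Fuse redundant (lat, lon) fixes: drop outliers vs. the median, average the rest."""
    xy = np.asarray(fixes, float)
    med = np.median(xy, axis=0)
    # Rough local metres-per-degree conversion for the deviation test.
    scale = np.array([111_320.0, 111_320.0 * np.cos(np.radians(med[0]))])
    dev = np.linalg.norm((xy - med) * scale, axis=1)
    kept = xy[dev <= max_dev]
    return kept.mean(axis=0)

# Third fix is ~550 m off and gets rejected before averaging.
print(fuse_fixes([(38.3850, -0.5140), (38.3851, -0.5141), (38.3900, -0.5200)]))
```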
- Published
- 2021
- Full Text
- View/download PDF
17. Targetless Camera-LiDAR Calibration in Unstructured Environments
- Author
-
Miguel Ángel Muñoz-Bañón, Francisco A. Candelas, Fernando Torres, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, and Automática, Robótica y Visión Artificial
- Subjects
Camera-LiDAR, Target-less calibration, Extrinsic calibration, Sensor fusion, Mobile robots, Lidar, Remote sensing, General Computer Science, General Engineering, General Materials Science, lcsh:Electrical engineering. Electronics. Nuclear engineering, lcsh:TK1-9971, Ingeniería de Sistemas y Automática - Abstract
Camera-LiDAR sensor fusion plays an important role in autonomous navigation research. Nowadays, the automatic calibration of these sensors remains a significant challenge in mobile robotics. In this article, we present a novel calibration method that achieves an accurate six-degree-of-freedom (6-DOF) rigid-body transformation estimation (aka extrinsic parameters) between the camera and the LiDAR. The method consists of a novel co-registration approach that uses local edge features in arbitrary environments to obtain 3D-to-2D errors between the camera and LiDAR data. Once we have these 3D-to-2D errors, we estimate the relative transform, i.e., the extrinsic parameters, that minimizes them. To find the best transform solution, we use the perspective-three-point (P3P) algorithm. To refine the final calibration, we use a Kalman filter, which gives the system high stability against noise disturbances. The presented method requires neither an artificial target nor a structured environment, and is therefore a target-less calibration. Furthermore, it does not require a dense point cloud, which holds the advantage of not needing scan accumulation. To test our approach, we use the state-of-the-art KITTI dataset, taking the calibration provided by the dataset as the ground truth. In this way, we achieve accurate results and demonstrate the robustness of the system against very noisy observations. This work was supported by the Regional Valencian Community Government and the European Regional Development Fund (ERDF) through the grants ACIF/2019/088 and AICO/2019/020.
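The 3D-to-2D error that the calibration minimizes can be sketched as follows (numpy only; pts_img are the matched 2D edge locations and T_cam_lidar a candidate extrinsic transform). The actual method wraps this in a P3P-based search plus Kalman refinement.

```python
import numpy as np

def reprojection_errors(pts_lidar, pts_img, K, T_cam_lidar):
    """Pixel errors of 3D LiDAR edge points against matched 2D image edges."""
    n = pts_lidar.shape[0]
    homog = np.hstack([pts_lidar, np.ones((n, 1))])
    cam = (T_cam_lidar @ homog.T).T[:, :3]        # LiDAR frame -> camera frame
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                   # perspective division
    return np.linalg.norm(uv - pts_img, axis=1)   # per-correspondence pixel error
```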
- Published
- 2020
18. Deeper in BLUE
- Author
-
Saúl Cova-Rocamora, Francisco A. Candelas, Miguel Ángel Muñoz-Bañón, Iván del Pino, Miguel Contreras, Fernando Torres, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, and Automática, Robótica y Visión Artificial
- Subjects
Unmanned ground vehicle, Mobile robot, GNSS, ROS, Low-level control, Extended Kalman filter, Localization, SLAM, Robot, Software, Artificial Intelligence, Electrical and Electronic Engineering, Mechanical Engineering, Control and Systems Engineering, Industrial and Manufacturing Engineering, Ingeniería de Sistemas y Automática - Abstract
Despite the progress that has been made with simulators and the existence of datasets, real experimental platforms are, and will continue to be, necessary. Well-designed research platforms that produce reliable results and are easy to operate and debug make all the difference in research productivity. In this paper, we present the work that turned a stock electric cart into a research robot called BLUE. It provides a ROS interface that allows real-time control, monitoring, and adjustment of the system. We provide a quantitative performance evaluation and a GitHub repository that contains all the information required to replicate the process. This work has been supported by the Spanish Government through the FPU grant FPU15/04446 and the research project DPI2015-68087-R.
- Published
- 2019
- Full Text
- View/download PDF
19. Domestic waste detection and grasping points for robotic picking up
- Author
-
Gea, Víctor de, Puente Méndez, Santiago T., Gil, Pablo, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Domestic waste, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Robotic Grasping, Deep learning, Machine Learning (cs.LG), 3D Points Grasping, Geograsp, Computer Science - Robotics, Object Detection, Mask-RCNN, Robotics (cs.RO), Ingeniería de Sistemas y Automática - Abstract
This paper presents an AI system applied to location and robotic grasping. The experimental setup is based on a parameter study to train a deep-learning network based on Mask-RCNN to perform waste location in indoor and outdoor environments, using five different classes and generating a new waste dataset. The AI system first obtains the RGBD data of the environment, then detects objects using the neural network. Next, the 3D object shape is computed using the network output and the depth channel. Finally, the shape is used to compute a grasp for a robot arm with a two-finger gripper. The objective is to classify the waste into groups to improve a recycling strategy. Comment: 2 pages, 3 figures, accepted as poster for presentation in ICRA 2021 Workshop: Emerging paradigms for robotic manipulation: from the lab to the productive world
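A minimal sketch of the "mask plus depth channel gives the 3D shape" step, assuming a pinhole camera with intrinsics K; illustrative only, not the paper's code.

```python
import numpy as np

def mask_to_points(mask, depth, K):
    """Lift a detector's segmentation mask to a 3D point cloud via the depth map."""
    vs, us = np.nonzero(mask)
    z = depth[vs, us]
    valid = z > 0                                  # discard missing depth readings
    us, vs, z = us[valid], vs[valid], z[valid]
    x = (us - K[0, 2]) * z / K[0, 0]
    y = (vs - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)             # (N, 3) object surface points
```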
- Published
- 2021
20. Dual-Branch CNNs for Vehicle Detection and Tracking on LiDAR Data
- Author
-
Alberto Sanfeliu, Francesc Moreno-Noguer, Victor Vaquero, Iván del Pino, Juan Andrade-Cetto, Joan Sola, Ministerio de Economía y Competitividad (España), European Commission, Ministerio de Ciencia, Innovación y Universidades (España), Agencia Estatal de Investigación (España), Universidad de Alicante. Instituto Universitario de Investigación Informática, Automática, Robótica y Visión Artificial, Institut de Robòtica i Informàtica Industrial, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial, Universitat Politècnica de Catalunya. VIS - Visió Artificial i Sistemes Intel·ligents, and Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
- Subjects
LiDAR, Point cloud, Convolutional neural network, Deep convolutional neural network, Vehicle detection and tracking, Tracking system, Minimum bounding box, Computer vision, Artificial intelligence, Robotics, Robòtica, Robots, Informàtica::Robòtica [Àrees temàtiques de la UPC], Mechanical Engineering, Automotive Engineering, Computer Science Applications, Ingeniería de Sistemas y Automática - Abstract
We present a novel vehicle detection and tracking system that works solely on 3D LiDAR information. Our approach segments vehicles using a dual-view representation of the 3D LiDAR point cloud on two independently trained convolutional neural networks, one for each view. A bounding box growing algorithm is applied to the fused output of the networks to properly enclose the segmented vehicles. Bounding boxes are grown using a probabilistic method that also takes occluded areas into account. The final vehicle bounding boxes act as observations for a multi-hypothesis tracking system which allows the position and velocity of the observed vehicles to be estimated. We thoroughly evaluate our system on the KITTI benchmarks, for detection and tracking separately, and show that our dual-branch classifier consistently outperforms previous single-branch approaches, improving on or directly competing with other state-of-the-art LiDAR-based methods. This work is supported by the EU project LOGIMATIC (H2020-Galileo2015-1-687534), the Spanish State Research Agency through the María de Maeztu Seal of Excellence to IRI (MDM-2016-0656), the ColRobTransp project (DPI2016-78957-R AEI/FEDER EU), the EB-SLAM project (DPI2017-89564-P), and the FPU grant FPU15/04446.
- Published
- 2021
21. Clasificación y manipulación de basura doméstica utilizando deep-learning
- Author
-
Santiago Timoteo Puente Méndez, Víctor de Gea, Pablo Gil, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
Deep Learning, Grasping, Perception for Grasping, Ingeniería de Sistemas y Automática - Abstract
This paper presents a recognition application that uses deep learning networks to classify domestic waste. Once an item is recognized, its location is determined in order to obtain grasping points so that a robot arm equipped with a parallel two-finger gripper can pick it up automatically. The algorithm is described, together with experimental results that confirm the validity of the proposal. This work was funded by the European Commission and FEDER funds through the COMMANDIA project (SOE2/P1/F0638), within the Interreg-V Sudoe programme. The training phase used computational infrastructure funded by the Generalitat Valenciana and FEDER through project IDIFEDER/2020/003.
- Published
- 2021
22. Touch Detection with Low-cost Visual-based Sensor
- Author
-
Pablo Gil, S. T. Puente, Julio Castaño-Amoros, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
Tactile Sensing, Convolutional Neural Networks, Robotic Grasping, DIGIT Sensor, Ingeniería de Sistemas y Automática - Abstract
Robotic manipulation remains an unsolved problem. It involves many complex aspects, for example, tactile perception of different objects and materials, grasping control to plan the robotic hand pose, etc. Most previous works on this topic have used expensive sensors, which makes industrial application difficult. In this work, we propose a grip detection system using a low-cost vision-based tactile sensor known as DIGIT, mounted on a ROBOTIQ 2F-140 gripper. We show that a deep convolutional network is able to detect contact or no contact. Capturing almost 12,000 contact and no-contact images from different objects, we achieve 99% accuracy on previously unseen samples in the best scenario. This system will allow us to implement a grasping controller for the gripper. Research work was completely funded by the European Commission and FEDER through the COMMANDIA project (SOE2/P1/F0638), supported by Interreg-V Sudoe. Computer facilities used were provided by the Valencia Government and FEDER through IDIFEDER/2020/003.
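As an illustrative PyTorch sketch of a binary contact/no-contact classifier for resized DIGIT frames (the architecture and sizes are assumptions, not the trained network from the paper):

```python
import torch
import torch.nn as nn

class ContactNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)   # logits: [no-contact, contact]

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

net = ContactNet()
logits = net(torch.randn(1, 3, 64, 64))   # one fake 64x64 RGB tactile frame
print(logits.softmax(dim=1))
```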
- Published
- 2021
- Full Text
- View/download PDF
23. OpenStreetMap-based Autonomous Navigation With LiDAR Naive-Valley-Path Obstacle Avoidance
- Author
-
Francisco A. Candelas-Herías, Fernando Torres, Miguel Ángel Muñoz-Bañón, Edison Velasco, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, and Automática, Robótica y Visión Artificial
- Subjects
FOS: Computer and information sciences, Mechanical Engineering, Systems and Control (eess.SY), Open street maps, Obstacle avoidance, Electrical Engineering and Systems Science - Systems and Control, Computer Science Applications, Computer Science - Robotics, Unmanned ground vehicle, Automotive Engineering, LiDAR point cloud, FOS: Electrical engineering, electronic engineering, information engineering, Autonomous navigation, Robotics (cs.RO), Path planning - Abstract
OpenStreetMap (OSM) is currently being studied as an environment representation for autonomous navigation. It provides advantages such as global consistency, a lightweight map-construction process, and a wide variety of publicly available road information. However, the location of this information is usually not very accurate locally. In this paper, we present a complete autonomous navigation pipeline that uses OSM information as the environment representation for global planning. To avoid the flaw of low local accuracy, we propose the novel LiDAR-based Naive-Valley-Path (NVP) method, which exploits the concept of "valley" areas to infer the local path furthest from obstacles. This behavior allows navigation through the center of trafficable areas, following the road's shape independently of OSM error. Furthermore, NVP is a naive method that is highly sample-time-efficient. This time efficiency also enables obstacle avoidance, even for dynamic objects. We demonstrate the system's robustness in our research platform BLUE, driving autonomously across the University of Alicante Scientific Park for more than 20 km, with an average error of 0.24 m against the road's center and an average sample time of 19.8 ms. Our vehicle avoids static obstacles in the road and even dynamic ones, such as vehicles and pedestrians. Comment: This paper is in its second revision for publication at IEEE Transactions on Intelligent Transportation Systems (T-ITS)
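The "valley" selection can be sketched as picking, among candidate points sampled across the trafficable area, the one whose clearance to the nearest LiDAR obstacle point is largest; a toy numpy version, not the NVP implementation.

```python
import numpy as np

def valley_waypoint(candidates, obstacles):
    """Pick the candidate point whose nearest obstacle is furthest away."""
    c = np.asarray(candidates, float)              # (N, 2) points across the road
    o = np.asarray(obstacles, float)               # (M, 2) LiDAR obstacle points (M > 0)
    d = np.linalg.norm(c[:, None, :] - o[None, :, :], axis=2)
    clearance = d.min(axis=1)                      # distance to nearest obstacle
    return c[np.argmax(clearance)]                 # the "valley" point
```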
- Published
- 2021
- Full Text
- View/download PDF
24. Towards footwear manufacturing 4.0: shoe sole robotic grasping in assembling operations
- Author
-
Jose F. Gomez, Guillermo Oliver, Pablo Gil, Fernando Torres, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
Shoe soles, Grasping, Laser scanning, Point cloud, Footwear, Digitization, Workcell, Robotics, Manufacturing automation, Automation, Computer vision, Mechanical Engineering, Industrial and Manufacturing Engineering, Computer Science Applications, Control and Systems Engineering, Software, Ingeniería de Sistemas y Automática - Abstract
In this paper, we present a robotic workcell for task automation in footwear manufacturing, such as sole digitization, glue dispensing, and sole manipulation from different places within the factory plant. We aim to make progress towards shoe industry 4.0. To achieve this, we have implemented a novel sole-grasping method, compatible with soles of different shapes, sizes, and materials, by exploiting the particular characteristics of these objects. Our proposal works well both with low-density point clouds from a single RGBD camera and with dense point clouds obtained from a laser scanner digitizer. The method computes antipodal grasping points from visual data in both cases and does not require prior recognition of the sole. It relies on sole contour extraction using concave hulls and on measuring the curvature of contour areas. Our method was tested both in a simulated environment and in real manufacturing conditions at the INESCOP facilities, processing 20 soles with different sizes and characteristics. Grasps were performed in two different configurations, obtaining an average success rate of 97.5% for real grasps of soles without heels made of materials of low or medium flexibility. In both cases, the grasping method was tested without tactile control throughout the task. Research work was completely funded by the European Commission and FEDER through the COMMANDIA project (SOE2/P1/F0638), supported by Interreg-V Sudoe. Part of the facilities used were provided by the Footwear Technological Institute (INESCOP).
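A hedged sketch of the antipodal-point idea on a 2D sole contour: estimate normals from the ordered contour and pick the pair with the most opposed normals. The real method works on concave-hull contours with curvature measures; this toy version only shows the antipodality test.

```python
import numpy as np

def antipodal_pair(contour):
    """Return the index pair on a closed 2D contour with the most opposed normals."""
    c = np.asarray(contour, float)                    # (N, 2) ordered boundary points
    t = np.roll(c, -1, axis=0) - np.roll(c, 1, axis=0)
    n = np.stack([t[:, 1], -t[:, 0]], axis=1)         # tangents rotated 90 degrees
    n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-9
    best, best_dot = (0, 0), 1.0
    for i in range(len(c)):
        dots = n @ n[i]                               # alignment of every normal with n[i]
        j = int(np.argmin(dots))                      # most anti-parallel normal
        if dots[j] < best_dot:
            best, best_dot = (i, j), dots[j]
    return best                                       # candidate finger contact indices
```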
- Published
- 2021
25. Detección de agarre de objetos desconocidos con sensor visual-táctil
- Author
-
Santiago Timoteo Puente Méndez, Inés Fernández Sánchez, Pablo Gil, Julio Castaño Amorós, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
Percepción táctil, Agarre robótico, DIGIT sensor, Redes neuronales convolucionales, Sensor DIGIT, Convolutional neural networks, Tactile perception, Robotic grasping, Ingeniería de Sistemas y Automática - Abstract
Robotic manipulation is still a challenge. It involves many complex aspects, such as tactile perception of a wide variety of objects and materials, grip control to plan the robotic hand posture, etc. Most previous work used expensive sensors for tactile perception tasks, which makes it difficult to transfer the results to industry. In this work, a grip detection system is proposed. It uses DIGIT sensors based on low-cost imaging technology. The method, based on deep Convolutional Neural Networks (CNNs), is capable of detecting contact or no contact with success rates greater than 95%. The system has been trained and tested on our own dataset, composed of more than 16,000 images from grasps of different objects, using several DIGIT units. The detection method is part of a grip controller for a ROBOTIQ 2F-140 gripper. This work was funded by FEDER through the European project COMMANDIA (SOE2/P1/F0638) of the Interreg-V Sudoe call. In addition, DGX-A100 computing facilities acquired through grant IDIFEDER/2020/003 of the regional government of the Generalitat Valenciana were used.
- Published
- 2021
26. Prediction of Tactile Perception from Vision on Deformable Objects
- Author
-
Zapata-Impata, Brayan S., Gil, Pablo, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, and Automática, Robótica y Visión Artificial
- Subjects
Robotic manipulation, Deformable objects, Computer vision, Robotics, Tactile perception, Robotic grasping, Ingeniería de Sistemas y Automática - Abstract
Workshop on Robotic Manipulation of Deformable Objects (ROMADO) at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020). Through the use of tactile perception, a manipulator can estimate the stability of its grip, among other things. However, tactile sensors are only activated upon contact. In contrast, humans can estimate the feeling of touching an object from its visual appearance. Providing robots with this ability to generate tactile perception from vision is desirable to achieve autonomy. To accomplish this, we propose using a Generative Adversarial Network. Our system learns to generate tactile responses using a visual representation of the object and target grasping data as the stimulus. Since collecting labeled samples of robotic tactile responses consumes hardware resources and time, we apply semi-supervised techniques. For this work, we collected 4000 samples with 4 deformable items and experimented with 4 tactile modalities. This work was supported in part by the Spanish Government and the FEDER Funds [BES-2016-078290] and in part by the European Commission [COMMANDIA SOE2/P1/F0638], action supported by Interreg-V Sudoe.
- Published
- 2020
27. Robotic workcell for sole grasping in footwear manufacturing
- Author
-
Guillermo Oliver, Pablo Gil, Fernando Torres, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
Shoe soles, Grasping, Conveyor belt, Robotics, Automation, Footwear, Manufacturing, Workcell, Artificial intelligence, Ingeniería de Sistemas y Automática - Abstract
The goal of this paper is to present a robotic workcell that automates several tasks of the cementing process in footwear manufacturing. Our cell's main applications are sole digitization for a wide variety of footwear, glue dispensing, and sole grasping from conveyor belts. The cell is made up of a manipulator arm endowed with a gripper, a conveyor belt and a 3D scanner. We have integrated all the elements into a ROS simulation environment, facilitating control and communication among them and providing flexibility to support future extensions. We propose a novel method to grasp soles of different shapes, sizes and materials, exploiting the particular characteristics of these objects. Our method relies on object contour extraction using concave hulls. We evaluate it on point clouds of 16 digitized real soles in three different scenarios: concave hull, k-NN extension and PCA correction. While we have tested this workcell in a simulated environment, the presented system's performance is scheduled to be tested on a real setup at the INESCOP facilities in the upcoming months. Work funded by the European Commission and FEDER funds through the COMMANDIA project (SOE2/P1/F0638), supported by Interreg-V Sudoe.
- Published
- 2020
28. Generation of Tactile Data From 3D Vision and Target Robotic Grasps
- Author
-
Youcef Mezouar, Pablo Gil, Brayan S. Zapata-Impata, Fernando Torres, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
3D vision, Tactile perception, Tactile feedback estimation, Tactile data generation, Robotic perception, Robotics, Robot, Computer vision, Artificial intelligence, Touch, Tactile sensor, Computer Science Applications, Human-Computer Interaction, Ingeniería de Sistemas y Automática - Abstract
Tactile perception is a rich source of information for robotic grasping: it allows a robot to identify a grasped object and assess the stability of a grasp, among other things. However, the tactile sensor must come into contact with the target object in order to produce readings. As a result, tactile data can only be attained if a real contact is made. We propose to overcome this restriction by employing a method that models the behaviour of a tactile sensor using 3D vision and grasp information as a stimulus. Our system regresses the quantified tactile response that would be experienced if this grasp were performed on the object. We experiment with 16 items and 4 tactile data modalities to show that our proposal learns this task with low error. This work was supported in part by the Spanish Government and the FEDER Funds (BES-2016-078290, PRX19/00289, RTI2018-094279-B-100) and in part by the European Commission (COMMANDIA SOE2/P1/F0638), action supported by Interreg-V Sudoe.
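As a rough PyTorch sketch of the regression idea (a visual descriptor plus grasp parameters mapped to a quantified tactile response); the dimensions and architecture are invented for illustration and are not the paper's model.

```python
import torch
import torch.nn as nn

# x: concatenated object-view descriptor (e.g. pooled point-cloud features)
# and grasp parameters; y: quantified tactile response per taxel.
model = nn.Sequential(
    nn.Linear(128 + 7, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 24),              # e.g. 24 taxel readings
)
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 135)             # fake batch: 128-d visual + 7-d grasp pose
y = torch.randn(32, 24)              # fake tactile targets
opt.zero_grad()
loss = loss_fn(model(x), y)          # one illustrative training step
loss.backward()
opt.step()
```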
- Published
- 2020
29. Grasped object recognition with proprioceptive-tactile hybrid sensing
- Author
-
Brayan Stiven Zapata Impata, Pablo Gil, Edison Velasco Sánchez, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
Proprioceptive and tactile learning, Robotic hand, Aprendizaje propioceptivo y táctil, Object recognition, Dextrogyre manipulation, Agarre de objetos, Reconocimiento de objetos, Reconocimiento propioceptivo y táctil, Proprioceptive and tactile recognition, Manipulación dextrógira, Object grasping, Ingeniería de Sistemas y Automática - Abstract
This work presents a hybrid proprioceptive-tactile approach to recognizing grasped objects. Proprioceptive data from a robotic hand are used to estimate the contact geometry and thus to distinguish the shape of each of the objects being grasped. The contact geometry is determined by the joint data of the robotic hand when it performs a closure grasp on the object's surface. In addition, the tactile data allow the robotic hand to determine the rigidity and flexibility of the grasped object, improving the recognition process when the contact geometry, and therefore the shape of different objects, is similar. The proposed method employs supervised learning classification techniques to combine the data from both types of sensor and identifies the type of object with an average success rate of 95.5% (accuracy) and 95.3% (F1 score), even in the presence of measurement uncertainty and pose ambiguity. These success rates were achieved by experimenting with 7 household objects and performing more than 3000 grasps. This work was funded by the European project COMMANDIA (SOE2/P1/F0638), co-funded by the Interreg-V Sudoe programme and the European Regional Development Fund, as well as by the national project DPI2015-68087-R.
- Published
- 2020
- Full Text
- View/download PDF
30. Bimanual grasping of objects assisted by vision
- Author
-
Pablo Gil, Jorge Pomares, Brayan S. Zapata-Impata, John Alejandro Castro-Vargas, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, Automática, Robótica y Visión Artificial, and Human Robotics (HURO)
- Subjects
Nubes de puntos ,Computer science ,Agarre cooperativo ,Manipulación robótica ,Robotic manipulation ,Agarre bimanual ,Detection of grasping points ,Detección de puntos de agarre ,RGBD ,Bimanual grasping ,Point clouds ,Cooperative grasping ,Humanities ,Ingeniería de Sistemas y Automática - Abstract
Object manipulation tasks sometimes require two or more cooperating robots. In Industry 4.0, assistive robotics is increasingly in demand, for example to lift, drag or push heavy or bulky packages, so robots with a human-like form intended to help human operators with these kinds of movements are becoming more common. This article presents a vision-assisted robotic platform for bimanual grasping and manipulation of objects. The platform consists of a metallic torso with a rotational joint at the hip and two 7-degree-of-freedom industrial manipulators that act as arms, each mounting a multi-fingered robotic hand at its end. Each upper extremity is endowed with visual perception from two RGBD sensors in an eye-to-hand configuration. The platform has been successfully used and tested for bimanual object grasping, with the aim of developing cooperative manipulation tasks coordinated between both extremities. This work was funded by the European project COMMANDIA (SOE2/P1/F0638), co-funded by the Interreg-V Sudoe programme and the European Regional Development Fund, and by the national project DPI2015-68087-R.
- Published
- 2020
- Full Text
- View/download PDF
31. Assistance Robotics and Biosensors 2019
- Author
-
Fernando Torres, Andrés Úbeda, S. T. Puente, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Human Robotics (HURO), and Automática, Robótica y Visión Artificial
- Subjects
Engineering ,Advanced biomedical signal processing ,robotic prostheses ,assistance robotics applications ,02 engineering and technology ,Biosensing Techniques ,lcsh:Chemical technology ,01 natural sciences ,Biochemistry ,Analytical Chemistry ,Human–computer interaction ,0202 electrical engineering, electronic engineering, information engineering ,lcsh:TP1-1185 ,Robotic prostheses ,electroencephalographic (EEG) sensors ,Electrical and Electronic Engineering ,Rehabilitation robotics ,Assistive robotics ,Robotic exoskeletons ,Instrumentation ,robotic exoskeletons ,business.industry ,Scientific progress ,Electromyography ,electromyographic (EMG) sensors ,010401 analytical chemistry ,Biomedical signal ,020206 networking & telecommunications ,Robotics ,Electroencephalography ,Prostheses and Implants ,advanced biomedical signal processing ,Exoskeleton Device ,Atomic and Molecular Physics, and Optics ,Electromyographic (EMG) sensors ,0104 chemical sciences ,Electroencephalographic (EEG) sensors ,Editorial ,Artificial intelligence ,business ,Assistance robotics applications ,Ingeniería de Sistemas y Automática - Abstract
This Special Issue is focused on breakthrough developments in the field of assistive and rehabilitation robotics. The selected contributions include current scientific progress from biomedical signal processing and cover applications to myoelectric prostheses, lower-limb and upper-limb exoskeletons and assistive robotics.
- Published
- 2020
32. In-hand recognition and manipulation of elastic objects using a servo-tactile control strategy
- Author
-
A. Delgado, Carlos A. Jara, Fernando Torres, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
Scheme (programming language) ,0209 industrial biotechnology ,Grasping ,Computer science ,General Mathematics ,3D single-object recognition ,02 engineering and technology ,Industrial and Manufacturing Engineering ,020901 industrial engineering & automation ,0202 electrical engineering, electronic engineering, information engineering ,Feature (machine learning) ,Computer vision ,Control (linguistics) ,In-hand manipulation ,computer.programming_language ,business.industry ,Deformable object ,020207 software engineering ,Object (computer science) ,Computer Science Applications ,Task (computing) ,Control and Systems Engineering ,Object model ,Artificial intelligence ,business ,computer ,Software ,Servo ,Tactile servoing ,Ingeniería de Sistemas y Automática - Abstract
Grasping and manipulating objects with robotic hands depends largely on the features of the object being handled. Features such as softness and deformability are especially crucial during manipulation tasks: the finger positions and the forces applied by the robot hand must be adapted to the deformation they cause. For unknown objects, a prior recognition stage is usually needed to obtain the object's features, and the manipulation strategy must be adapted accordingly. Precise control of the manipulation task usually requires a complex object model, for example one based on the Finite Element Method; however, such models require a complete discretization of the object and are too time-consuming for manipulation tasks. For that reason, this paper presents a new control strategy for the robot hand based on a minimal spring model of the objects. The paper also presents an adaptable tactile-servo control scheme that can be used in in-hand manipulation tasks with deformable objects. Tactile control is based on achieving and maintaining a force value at the contact points that changes according to the object's softness, a feature estimated in an initial recognition stage. Research supported by the Spanish Ministry of Economy, European FEDER funds, the Valencia Regional Government and the University of Alicante, through projects DPI2012-32390, DPI2015-68087-R, PROMETEO/2013/085 and GRE 15-05.
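A minimal sketch of the tactile-servo principle described above, assuming a scalar fingertip force and a linear spring contact: the force set-point depends on an estimated softness, and a proportional loop drives the measured force towards it. The gains, the stiffness value and the softness-to-force mapping are illustrative, not the paper's controller.

```python
# Minimal sketch of a tactile-servo loop with a spring contact model.
# Gains, stiffness and the softness mapping are illustrative values.
import numpy as np

def target_force(softness):
    """Softer objects (softness near 1) get a lower force set-point."""
    return 2.0 * (1.0 - 0.8 * softness)  # Newtons, illustrative mapping

def tactile_servo(softness=0.6, k_obj=300.0, kp=0.002, steps=50):
    f_ref = target_force(softness)
    x = 0.0                       # finger displacement into the object (m)
    for _ in range(steps):
        f = k_obj * x             # spring model: contact force
        x += kp * (f_ref - f)     # proportional correction of the press
    return f, f_ref

f, f_ref = tactile_servo()
print(f"measured {f:.3f} N vs target {f_ref:.3f} N")
```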
- Published
- 2017
- Full Text
- View/download PDF
33. e-Health: Biomedical instrumentation with Arduino
- Author
-
Fernando Torres, Andrés Úbeda, S. T. Puente, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, and Automática, Robótica y Visión Artificial
- Subjects
Engineering ,Academic year ,Point (typography) ,business.industry ,Health information technology ,0206 medical engineering ,05 social sciences ,050301 education ,Library science ,02 engineering and technology ,Biomedical instrumentation ,020601 biomedical engineering ,Neurorobotics ,Task (project management) ,Engineering management ,Control and Systems Engineering ,Arduino ,ComputingMilieux_COMPUTERSANDEDUCATION ,Technology for health ,Instrumentation (computer programming) ,business ,0503 education ,Ingeniería de Sistemas y Automática - Abstract
This contribution describes the planning and development of laboratory activities for an introduction to biomedical system instrumentation, as well as some experiences and results obtained from them. The activities were applied in the course "Systems and Instruments Foundations" during the academic year 2016-17. This course is scheduled in the second year of the new Health Information Technology Degree offered by the University of Alicante. Teaching biomedical instrumentation from an engineering point of view to students with little medical or engineering background is a complex task. This paper presents the proposed laboratory practices, which are based on Arduino and the e-Health shield to teach biomedical concepts. A project-based learning methodology is used in the laboratory sessions, where students must complete a project by the end of the semester. This work was supported by the Valencia Regional Government through the research project PROMETEO/2013/085, and by the Polytechnic School of the University of Alicante.
- Published
- 2017
- Full Text
- View/download PDF
34. Adaptive tactile control for in-hand manipulation tasks of deformable objects
- Author
-
A. Delgado, Carlos A. Jara, Fernando Torres, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
0209 industrial biotechnology ,Engineering ,Rigidity (psychology) ,02 engineering and technology ,Industrial and Manufacturing Engineering ,020901 industrial engineering & automation ,Computer vision ,Tactile sensing ,In-hand manipulation ,In-hand learning ,business.industry ,Mechanical Engineering ,05 social sciences ,GRASP ,Object (computer science) ,Rigid body ,Computer Science Applications ,Task (computing) ,Contact maintenance ,Action (philosophy) ,Control and Systems Engineering ,Robot ,Artificial intelligence ,0509 other social sciences ,050904 information & library sciences ,business ,Software ,Tactile sensor ,Ingeniería de Sistemas y Automática - Abstract
Tactile sensors are key components of a robot hand system and are usually used to obtain an object's features; how best to exploit them remains an open research topic. This paper presents a new strategy, based mainly on tactile data, for in-hand extraction of an object's properties and for control of the interaction forces exerted by the robot fingers. The scope of this strategy is to grasp and manipulate solid objects, both rigid and soft. Assuming the hand starts in a configuration in which the object is grasped, the property-extraction procedure is executed; once it finishes, the object is classified as a rigid body, a soft elastic body or a soft plastic body. After classification, during in-hand manipulation the contact points between the grasped object and the fingers are maintained using the information provided by the tactile sensors. Each task is defined as a sequence of basic actions in which the contact points and applied forces are adapted to the action to be performed and to the estimated features of the object. The approach tries to imitate human behaviour, in which the forces applied by the fingers change as the human estimates the rigidity of a body, and the fingers react to unexpected movements of the object so as to keep the contact points. This research is supported by the Spanish Ministry of Economy, European FEDER funds, the Valencia Regional Government, and the University of Alicante, through projects DPI2015-68087-R, PROMETEO/2013/085, and GRE 15-05.
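The press-and-release classification can be illustrated with a short numpy sketch, assuming access to force and indentation samples; the rigidity threshold and the residual-deformation test are hypothetical stand-ins for the paper's estimated features.

```python
# Minimal sketch (assumed thresholds): classify a grasped object as
# rigid, soft elastic or soft plastic from a press-and-release probe,
# using the force/indentation slope and the residual deformation.
import numpy as np

def classify_body(disp_press, force_press, residual_disp,
                  k_rigid=5000.0, plastic_res=0.2e-3):
    k = np.polyfit(disp_press, force_press, 1)[0]  # stiffness, N/m
    if k > k_rigid:
        return "rigid body", k
    # soft: an elastic body recovers its shape, a plastic one keeps
    # a residual deformation after the finger releases the press
    if residual_disp > plastic_res:
        return "soft plastic body", k
    return "soft elastic body", k

disp = np.linspace(0.0, 2e-3, 20)   # indentation during pressing (m)
force = 800.0 * disp                # simulated soft response (N)
print(classify_body(disp, force, residual_disp=0.05e-3))
```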
- Published
- 2017
- Full Text
- View/download PDF
35. Visual Completion of 3D Object Shapes from a Single View for Robotic Tasks
- Author
-
Youcef Mezouar, Juan-Antonio Corrales-Ramon, Carlos M. Mateo, Pablo Gil, Mohamed Tahoun, Omar Tahri, Institut National des Sciences Appliquées - Centre Val de Loire (INSA CVL), Institut National des Sciences Appliquées (INSA), Institut Pascal (IP), SIGMA Clermont (SIGMA Clermont)-Université Clermont Auvergne [2017-2020] (UCA [2017-2020])-Centre National de la Recherche Scientifique (CNRS), Universidad de Alicante, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
Visual perception ,Computer science ,3D Vision ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,stereo image processing ,Convolutional neural network ,object recognition ,Image (mathematics) ,[SPI.AUTO]Engineering Sciences [physics]/Automatic ,Object shape prediction ,convolutional neural nets ,0202 electrical engineering, electronic engineering, information engineering ,[INFO.INFO-RB]Computer Science [cs]/Robotics [cs.RO] ,Computer vision ,manipulators ,shape recognition ,single-view ,business.industry ,Deep learning ,Cognitive neuroscience of visual object recognition ,020207 software engineering ,Robotics ,3D object shape recognition ,Object (computer science) ,robot vision ,image-based robotic manipulation tasks ,configuration eye-in-hand ,3D Deep Convolutional Neural Network ,Robot ,020201 artificial intelligence & image processing ,learning (artificial intelligence) ,Artificial intelligence ,visual completion ,business ,manipulator robots ,CNN ,Ingeniería de Sistemas y Automática - Abstract
The goal of this paper is to predict 3D object shape in order to improve the visual perception of robots in grasping and manipulation tasks. The planning of image-based robotic manipulation tasks depends on the recognition of the object's shape, and manipulator robots usually use a camera in an eye-in-hand configuration, which limits grasp computation to the visible part of the object. In this paper, we present a 3D Deep Convolutional Neural Network that predicts the hidden parts of objects from a single view and thereby recovers their complete shape. We have tested our proposal with both previously seen objects and novel objects from a well-known dataset.
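A minimal sketch of a voxel-based 3D encoder-decoder of the kind the abstract describes, assuming a 32x32x32 occupancy grid; the layer widths are illustrative and this is not the paper's network.

```python
# Minimal sketch: 3D encoder-decoder CNN for shape completion
# (partial single-view voxels in, full occupancy grid out).
import torch
import torch.nn as nn

class ShapeCompletion3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(                 # 32^3 -> 8^3
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(                 # 8^3 -> 32^3
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
            nn.Sigmoid(),                         # voxel occupancy in [0,1]
        )

    def forward(self, partial_voxels):
        return self.dec(self.enc(partial_voxels))

net = ShapeCompletion3D()
partial = (torch.rand(1, 1, 32, 32, 32) > 0.9).float()  # partial view
completed = net(partial)                                 # predicted shape
print(completed.shape)  # torch.Size([1, 1, 32, 32, 32])
```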
- Published
- 2019
- Full Text
- View/download PDF
36. Tactile-Driven Grasp Stability and Slip Prediction
- Author
-
Pablo Gil, Fernando Torres, Brayan S. Zapata-Impata, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
0209 industrial biotechnology ,Control and Optimization ,Computer science ,Process (engineering) ,lcsh:Mechanical engineering and machinery ,Stability (learning theory) ,Tactile perception ,02 engineering and technology ,Robotic grasping ,Slip detection ,01 natural sciences ,Intelligent manipulation ,Contact force ,020901 industrial engineering & automation ,Artificial Intelligence ,Control theory ,slip detection ,tactile perception ,stability detection ,lcsh:TJ1-1570 ,intelligent manipulation ,Slip (vehicle dynamics) ,robotic grasping ,Stability detection ,Mechanical Engineering ,010401 analytical chemistry ,GRASP ,Work (physics) ,0104 chemical sciences ,Slippage ,Ingeniería de Sistemas y Automática - Abstract
One of the challenges in robotic grasping tasks is the problem of detecting whether a grip is stable or not. The lack of stability during a manipulation operation usually causes the slippage of the grasped object due to poor contact forces. Frequently, an unstable grip can be caused by an inadequate pose of the robotic hand or by insufficient contact pressure, or both. The use of tactile data is essential to check such conditions and, therefore, predict the stability of a grasp. In this work, we present and compare different methodologies based on deep learning in order to represent and process tactile data for both stability and slip prediction. Work funded by the Spanish Ministries of Economy, Industry and Competitiveness and Science, Innovation and Universities through the grant BES-2016-078290 and the project RTI2018-094279-B-100, respectively, as well as the European Commission and FEDER funds through the COMMANDIA project (SOE2/P1/F0638), action supported by Interreg-V Sudoe.
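For readers unfamiliar with the representations being compared, a short sketch follows showing the same taxel reading as a plain feature vector and as a matrix-like tactile image suitable for a CNN; the 4x6 taxel layout is an assumption.

```python
# Minimal sketch of two tactile representations: a plain feature
# vector vs. a matrix-like "tactile image". Layout is illustrative.
import numpy as np

taxels = np.random.rand(24)                # one reading per taxel

vector_repr = taxels                       # plain feature vector
image_repr = taxels.reshape(4, 6)          # preserves taxel neighbourhood

# normalise to [0, 1] before feeding either representation to a model
image_repr = (image_repr - image_repr.min()) / np.ptp(image_repr)
print(vector_repr.shape, image_repr.shape)  # (24,) (4, 6)
```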
- Published
- 2019
- Full Text
- View/download PDF
37. Virtualization of Robotic Hands Using Mobile Devices †
- Author
-
S. T. Puente, Fernando Torres, Francisco A. Candelas, Lucía Más, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, and Automática, Robótica y Visión Artificial
- Subjects
0209 industrial biotechnology ,Control and Optimization ,Computer science ,lcsh:Mechanical engineering and machinery ,robotic manipulation ,02 engineering and technology ,Allegro Hand ,computer.software_genre ,telerobotics ,Unity 3D ,Human–robot interaction ,Virtual reality ,020901 industrial engineering & automation ,Artificial Intelligence ,Human–computer interaction ,0202 electrical engineering, electronic engineering, information engineering ,human–robot interaction ,lcsh:TJ1-1570 ,Server-side ,Shadow Dexterous Hand ,Telerobotics ,business.industry ,Mechanical Engineering ,020208 electrical & electronic engineering ,Usability ,ROS ,Virtualization ,Robotic manipulation ,Shadow Hand ,User control ,virtual reality ,business ,computer ,Mobile device ,Ingeniería de Sistemas y Automática - Abstract
This article presents a multiplatform application for the tele-operation of a robot hand using virtualization in Unity 3D. This approach grants usability to users that need to control a robotic hand, allowing supervision in a collaborative way. This paper focuses on a user application designed for the 3D virtualization of a robotic hand and on the tele-operation architecture. The designed system allows for the simulation of any robotic hand; it has been tested with the virtualization of the four-fingered Allegro Hand of SimLab, with 16 degrees of freedom, and the Shadow hand, with 24 degrees of freedom. The system allows the position of each finger to be controlled by means of joint and Cartesian coordinates. All user control interfaces are designed using Unity 3D, so that a multiplatform philosophy is achieved. On the server side, the user application connects to a ROS (Robot Operating System) server through a TCP/IP socket, either to control a real hand or to share a simulation of it among several users. If a real robot hand is used, real-time control and feedback of all the joints of the hand are communicated to the set of users. Finally, the system has been tested with a set of users, with satisfactory results. This research was funded by Ministerio de Ciencia, Innovación y Universidades grant number RTI2018-094279-B-100.
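A minimal sketch of the client side of such an architecture, assuming a hypothetical JSON-over-TCP message schema (the real system speaks to a ROS server over a TCP/IP socket, but the exact protocol is not given here, so the message fields, port and reply format are illustrative):

```python
# Minimal sketch: send finger joint targets to a tele-operation server
# over TCP/IP. The JSON schema, port and reply format are hypothetical.
import json
import socket

def send_joint_command(host, port, joints):
    msg = {"hand": "allegro", "joints": joints}   # 16 values for Allegro
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall((json.dumps(msg) + "\n").encode("utf-8"))
        reply = sock.makefile().readline()        # e.g. current state echo
    return json.loads(reply) if reply else None

# Example (requires a listening server):
# state = send_joint_command("127.0.0.1", 9090, [0.0] * 16)
```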
- Published
- 2019
- Full Text
- View/download PDF
38. vision2tactile: Feeling Touch by Sight
- Author
-
Zapata-Impata, Brayan S., Gil, Pablo, Torres, Fernando, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
Robotic manipulation ,Tactile perception for robots ,Sim to real for robotic grasping ,Grasping Learning from simulation ,Ingeniería de Sistemas y Automática - Abstract
Submitted to the Workshop on Closing the Reality Gap in Sim2real Transfer for Robotic Manipulation, Freiburg, Germany, June 23, 2019. The latest trends in robotic grasping combine vision and touch to improve the performance of systems at tasks like stability prediction. However, tactile data are only available during the grasp, limiting the set of scenarios in which multimodal solutions can be applied. Could we obtain them prior to grasping? We explore the use of visual perception as a stimulus for generating tactile data, so that the robotic system can "feel" the response of the tactile perception just by looking at the object. Work funded by the Spanish Government and the FEDER Funds (BES-2016-078290, RTI2018-094279-B-100) as well as by the European Commission (COMMANDIA SOE2/P1/F0638), action supported by Interreg-V Sudoe.
- Published
- 2019
39. Predicción de la Estabilidad en Tareas de Agarre Robótico con Información Táctil
- Author
-
Zapata-Impata, Brayan S., Gil, Pablo, Torres, Fernando, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
Percepción táctil ,Agarre robótico ,Aprendizaje ,Estabilidad de agarre ,Manipuladores robóticos ,Manipulación inteligente ,Ingeniería de Sistemas y Automática - Abstract
In robotic manipulation tasks it is of particular interest to detect whether a grasp is stable or whether, on the contrary, the grasped object is slipping between the fingers due to inadequate contact. Frequently, grasp instability is a consequence of a poor pose of the robotic hand or gripper during execution, and/or of insufficient contact pressure when performing the task. The use of tactile information, and how it is represented, is vital for predicting grasp stability. In this work, different methodologies for representing tactile information are presented and compared, together with the learning methods best suited to each chosen tactile representation. This work was funded by the European Regional Development Fund (FEDER) and the Ministry of Economy, Industry and Competitiveness through the project RTI2018-094279-B-100 and the pre-doctoral grant BES-2016-078290, and also with the support of the European Commission and the Interreg-V Sudoe programme through project SOE2/P1/F0638.
- Published
- 2019
40. Semantic Segmentation of SLAR Imagery with Convolutional LSTM Selectional AutoEncoders
- Author
-
Pablo Gil, Antonio-Javier Gallego, Antonio Pertusa, Robert B. Fisher, Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Reconocimiento de Formas e Inteligencia Artificial, and Automática, Robótica y Visión Artificial
- Subjects
010504 meteorology & atmospheric sciences ,Coast detection ,Computer science ,ship detection ,Science ,0211 other engineering and technologies ,side-looking airborne radar ,oil spills ,coast detection ,neural networks ,supervised learning ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,02 engineering and technology ,01 natural sciences ,law.invention ,law ,Side-looking airborne radar ,Segmentation ,Radar ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,Learning classifier system ,Artificial neural network ,business.industry ,Supervised learning ,Process (computing) ,Response time ,Oil spills ,Side looking airborne radar ,Pattern recognition ,Lenguajes y Sistemas Informáticos ,General Earth and Planetary Sciences ,Artificial intelligence ,business ,Ship detection ,Neural networks ,Ingeniería de Sistemas y Automática - Abstract
We present a method for detecting maritime oil spills from Side-Looking Airborne Radar (SLAR) sensors mounted on aircraft, in order to enable a quick response from emergency services when an oil spill occurs. The proposed approach introduces a new type of neural architecture, named Convolutional Long Short-Term Memory Selectional AutoEncoders (CMSAE), which allows the simultaneous segmentation of multiple classes such as coast, oil spill and ships. Unlike previous works using full SLAR images, only a few scanlines from the radar's beam-scanning are needed to perform the detection. The main objective is to develop a method that performs accurate segmentation using only the current and previous sensor information, in order to return a real-time response during the flight. The proposed architecture uses a series of CMSAE networks to process each of the target classes in parallel. The outputs of these networks are given to a machine learning classifier to perform the final detection. Results show that the proposed approach can reliably detect oil spills and other maritime objects in SLAR sequences, outperforming the accuracy of previous state-of-the-art methods with a response time of only 0.76 s. This research was funded by the Spanish Government's Ministry of Economy, Industry and Competitiveness, European Regional Development Funds and Babcock MCS Spain through the RTC-2014-1863-8 and INAER4-14Y (IDI-20141234) projects.
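A minimal Keras sketch of the convolutional-LSTM segmentation idea, assuming a short window of scanlines and four classes (coast, oil spill, ship, background); the real CMSAE architecture is more elaborate than this stand-in, and the shapes are illustrative.

```python
# Minimal sketch: ConvLSTM over a window of SLAR scanlines, emitting
# a per-pixel class map. Shapes and layer sizes are illustrative.
import tensorflow as tf

T, H, W, NUM_CLASSES = 8, 32, 256, 4   # scanline window and classes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, H, W, 1)),
    tf.keras.layers.ConvLSTM2D(16, kernel_size=3, padding="same",
                               return_sequences=False),
    tf.keras.layers.Conv2D(NUM_CLASSES, kernel_size=1,
                           activation="softmax"),  # per-pixel classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```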
- Published
- 2019
- Full Text
- View/download PDF
41. FPGA-based architecture for direct visual control robotic systems
- Author
-
Aiman Alabdo, Javier Pérez, Jorge Pomares, Fernando Torres, Gabriel J. Garcia, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, and Automática, Robótica y Visión Artificial
- Subjects
0209 industrial biotechnology ,Computer science ,02 engineering and technology ,Visual servoing ,Visual control ,Parallel architectures ,020901 industrial engineering & automation ,Reconfigurable architectures ,Control theory ,Electrical and Electronic Engineering ,Field-programmable gate array ,Simulation ,Robot control ,Mechanical Engineering ,Response time ,Control engineering ,021001 nanoscience & nanotechnology ,Computer Science Applications ,Embedded software ,Control and Systems Engineering ,Robot ,0210 nano-technology ,Ingeniería de Sistemas y Automática - Abstract
This paper describes a new framework for quickly developing dynamic visual servoing systems embedded in an FPGA. The parallel design of the algorithms increases the precision of this kind of controller while minimizing the response time. Additionally, a control framework for the dynamic visual control of robot arms is proposed. The direct image-based visual controllers derived from the framework allow trajectories to be tracked with different dynamic behaviours depending on a weighting matrix. A new method to compensate for the chaotic behaviour of this kind of system is also included in the proposed framework. To demonstrate the feasibility of the proposed FPGA-based framework, and the effects of the metrics, some of the derived controllers are evaluated while tracking image trajectories. This research is supported by the Spanish Ministry of Economy, the European FEDER funds, the Valencia Regional Government and the University of Alicante through the research projects DPI2015-68087-R, PROMETEO/2013/085 and GRE12-17.
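Controllers of this kind build on the classic image-based visual servoing law v = -lambda (W L)^+ (W e), where L is the interaction matrix, e the image-feature error and W a weighting matrix shaping the dynamic behaviour. A minimal numpy sketch follows; point coordinates, depths and gains are illustrative values, not the paper's FPGA implementation.

```python
# Minimal sketch of a weighted image-based visual servoing law.
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one normalised image point at depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x * x), y],
        [0, -1 / Z, y / Z, 1 + y * y, -x * y, -x],
    ])

def ibvs_velocity(points, depths, errors, lam=0.5, W=None):
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = np.concatenate(errors)
    if W is None:
        W = np.eye(len(e))                    # identity: classic IBVS
    return -lam * np.linalg.pinv(W @ L) @ (W @ e)

v = ibvs_velocity(points=[(0.1, 0.0), (-0.1, 0.05), (0.0, -0.1)],
                  depths=[1.0, 1.2, 0.9],
                  errors=[np.array([0.02, -0.01])] * 3)
print(v)  # 6-DOF camera velocity command
```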
- Published
- 2016
- Full Text
- View/download PDF
42. Direct image-based visual servoing of free-floating space manipulators
- Author
-
Javier Pérez Alepuz, M. Reza Emami, Jorge Pomares, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, and Automática, Robótica y Visión Artificial
- Subjects
0209 industrial biotechnology ,Engineering ,Aerospace Engineering ,02 engineering and technology ,Visual servoing ,01 natural sciences ,Visual control ,010305 fluids & plasmas ,Computer Science::Robotics ,Attitude control ,Space manipulator ,020901 industrial engineering & automation ,Control theory ,0103 physical sciences ,Model-based control ,Computer vision ,business.industry ,Mobile manipulator ,Parallel manipulator ,Free-floating manipulator ,Trajectory ,Robot ,Artificial intelligence ,business ,Ingeniería de Sistemas y Automática - Abstract
This paper presents an image-based controller to perform the guidance of a free-floating robot manipulator. The manipulator has an eye-in-hand camera system and is attached to a base satellite. The base is completely free, floating in space with no attitude control, and thus reacts freely to the movements of the robot manipulator attached to it. The proposed image-based approach uses the system's kinematic and dynamic models not only to achieve a desired location with respect to an observed object in space, but also to follow a desired trajectory with respect to the object. To do this, the paper presents an optimal control approach for guiding the free-floating satellite-mounted robot using visual information, optimizing the motor commands with respect to a specified metric and including chaos compensation. The proposed controller is applied to the visual control of a four-degree-of-freedom robot manipulator in different scenarios. Research supported by the Spanish Ministry of Economy through the research project DPI2015-68087-R.
- Published
- 2016
- Full Text
- View/download PDF
43. Design and application of an immersive virtual reality system to enhance emotional skills for children with autism spectrum disorders
- Author
-
Asunción Lledó, Gonzalo Lorenzo, Rosabel Martínez Roig, Jorge Pomares, Universidad de Alicante. Departamento de Psicología Evolutiva y Didáctica, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Departamento de Didáctica General y Didácticas Específicas, EDUTIC- ADEI (Educación y Tecnologías de la Información y Comunicación- Atención a la Diversidad. Escuela Inclusiva), and Automática, Robótica y Visión Artificial
- Subjects
General Computer Science ,Applied psychology ,Virtual reality ,computer.software_genre ,Education ,Social skills ,Didáctica y Organización Escolar ,ComputingMilieux_COMPUTERSANDEDUCATION ,medicine ,0501 psychology and cognitive sciences ,Multimedia ,05 social sciences ,050301 education ,Autism spectrum disorders ,medicine.disease ,Social situation ,Mood ,Autism spectrum disorder ,Autism ,Computer vision ,Psychology ,Emotional skills ,0503 education ,computer ,Ingeniería de Sistemas y Automática ,050104 developmental & child psychology - Abstract
This paper proposes the design and application of an immersive virtual reality system to improve and train the emotional skills of students with autism spectrum disorders. It has been designed for primary school students between the ages of 7 and 12, and all participants have a confirmed diagnosis of autism spectrum disorder. The immersive environment allows the student to train and develop different social situations in a structured, visual and continuous manner. The use of a computer vision system to automatically determine the child's emotional state is proposed. This system has been created with two goals in mind: first, to update the social situations taking the student's emotional mood into account, and second, to confirm automatically whether the child's behaviour is appropriate in the represented social situation. The results described in this paper show a significant improvement in the children's emotional competences compared with the results obtained until now using earlier virtual reality systems. Highlights: virtual reality to improve the emotional skills of ASD students; an immersive virtual reality system to create social situations where the students can practice their emotional responses and improve their emotional skills; design and implementation of protocols to evaluate the students' emotional response; identification, development and training of appropriate emotional behaviours of ASD students.
- Published
- 2016
- Full Text
- View/download PDF
44. TactileGCN: A Graph Convolutional Network for Predicting Grasp Stability with Tactile Sensors
- Author
-
Garcia-Garcia, Alberto, Zapata-Impata, Brayan S., Orts-Escolano, Sergio, Gil, Pablo, Garcia-Rodriguez, Jose, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Departamento de Ciencia de la Computación e Inteligencia Artificial, Universidad de Alicante. Departamento de Tecnología Informática y Computación, Universidad de Alicante. Instituto Universitario de Investigación Informática, Automática, Robótica y Visión Artificial, Robótica y Visión Tridimensional (RoViT), and Informática Industrial y Redes de Computadores
- Subjects
Artificial Intelligence ,Tactile image ,Ciencia de la Computación e Inteligencia Artificial ,Tactile detection ,Graph Neural Network ,Tactile sensing ,GCN ,Arquitectura y Tecnología de Computadores ,Prediction of grasp stability ,Ingeniería de Sistemas y Automática - Abstract
Tactile sensors provide useful contact data during interaction with an object, which can be used to learn to determine the stability of a grasp accurately. Most works in the literature represent tactile readings as plain feature vectors or matrix-like tactile images and use them to train machine learning models. In this work, we explore an alternative way of exploiting tactile information to predict grasp stability: leveraging graph-like representations of tactile data, which preserve the actual spatial arrangement of the sensor's taxels and their locality. In experimentation, we trained a Graph Neural Network to binary-classify grasps as stable or slippery. To train the network and prove its predictive capabilities for the problem at hand, we captured a novel dataset of approximately 5000 three-fingered grasps across 41 objects for training and 1000 grasps with 10 unknown objects for testing. Our experiments prove that this novel approach can be effectively used to predict grasp stability. This work has been funded by the Spanish Government with FEDER funds (TIN2016-76515-R and DPI2015-68087-R), by two grants for PhD studies (FPU15/04516 and BES-2016-078290), by regional projects (GV/2018/022 and GRE16-19) and by the European Commission (COMMANDIA SOE2/P1/F0638), action supported by Interreg-V Sudoe.
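A minimal sketch of the graph idea, assuming the torch_geometric library is available: taxels become nodes carrying pressure features, grid neighbours are connected by edges, and a two-layer GCN pools node embeddings into a stable/slippery prediction. The 4x4 taxel grid and layer sizes are illustrative, not the paper's network.

```python
# Minimal sketch: taxel grid as a graph, classified by a small GCN.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool

def grid_edges(rows, cols):
    """Undirected 4-neighbourhood connectivity of a taxel grid."""
    edges = []
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if c + 1 < cols:
                edges += [(i, i + 1), (i + 1, i)]
            if r + 1 < rows:
                edges += [(i, i + cols), (i + cols, i)]
    return torch.tensor(edges, dtype=torch.long).t()

class TactileGCNSketch(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.c1 = GCNConv(1, 32)
        self.c2 = GCNConv(32, 64)
        self.out = torch.nn.Linear(64, 2)  # stable vs slippery

    def forward(self, data):
        h = torch.relu(self.c1(data.x, data.edge_index))
        h = torch.relu(self.c2(h, data.edge_index))
        batch = torch.zeros(h.size(0), dtype=torch.long)  # one graph
        return self.out(global_mean_pool(h, batch))

data = Data(x=torch.rand(16, 1), edge_index=grid_edges(4, 4))
print(TactileGCNSketch()(data))  # logits for the two classes
```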
- Published
- 2019
45. Precise Ship Location With CNN Filter Selection From Optical Aerial Images
- Author
-
Antonio Pertusa, Pablo Gil, Antonio-Javier Gallego, Samer Alashhab, Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, Reconocimiento de Formas e Inteligencia Artificial, and Automática, Robótica y Visión Artificial
- Subjects
General Computer Science ,Computer science ,Object detection ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Convolutional neural network ,learning systems ,remote sensing ,General Materials Science ,Satellite imagery ,Categorical variable ,Artificial neural network ,Artificial neural networks ,Learning systems ,business.industry ,General Engineering ,Pattern recognition ,object detection ,Filter (signal processing) ,Remote sensing ,Thresholding ,Lenguajes y Sistemas Informáticos ,Artificial intelligence ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,business ,lcsh:TK1-9971 ,Ingeniería de Sistemas y Automática - Abstract
This paper presents a method for the efficient detection of small maritime objects. The proposed method employs aerial images in the visible spectrum as inputs to train a categorical convolutional neural network for the classification of ships. A subset of the filters that make the greatest contribution to the classification of the target class is selected from the inner layers of the CNN. The gradients with respect to the input image are then calculated on these filters, normalized and combined. Thresholding and a morphological operation are then applied in order to obtain the localization. One advantage of the proposed approach over previous object detection methods is that only a few images need to be labelled with bounding boxes of the targets in order to train for localization. The method was evaluated with an extended version of the MASATI (MAritime SATellite Imagery) dataset. This new dataset has more than 7,000 images, 4,157 of which contain ships. Using only 14 training images, the proposed approach achieves better results for small targets than other well-known object detection methods, which also require many more training images. This work was supported in part by the Spanish Government's Ministry of Economy, Industry, and Competitiveness under Project RTC-2014-1863-8, and in part by Babcock MCS Spain under Project INAER4-14Y (IDI-20141234).
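A minimal sketch of the gradient-based localization step, with a toy CNN and hypothetical filter indices standing in for the trained classifier and its selected filters: the response of the chosen filters is back-propagated to the input, the gradients are combined into a saliency map, and the map is thresholded and cleaned morphologically.

```python
# Minimal sketch: filter-gradient saliency -> threshold -> morphology.
import torch
import torch.nn as nn
from scipy.ndimage import binary_opening, label, find_objects

conv = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
selected_filters = [2, 5, 11]          # hypothetical "ship" filters

img = torch.rand(1, 3, 64, 64, requires_grad=True)
acts = conv(img)
acts[:, selected_filters].sum().backward()  # gradients on those filters

sal = img.grad.abs().max(dim=1).values[0]          # combine channels
sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
mask = binary_opening((sal > 0.5).numpy())          # morphological cleanup
labels, n = label(mask)
print(n, find_objects(labels))                      # candidate locations
```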
- Published
- 2019
46. 3DCNN Performance in Hand Gesture Recognition Applied to Robot Arm Interaction
- Author
-
John Alejandro Castro-Vargas, Pablo Gil, Jose Garcia-Rodriguez, Brayan S. Zapata-Impata, Fernando Torres, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Departamento de Tecnología Informática y Computación, Universidad de Alicante. Instituto Universitario de Investigación Informática, Automática, Robótica y Visión Artificial, and Informática Industrial y Redes de Computadores
- Subjects
0209 industrial biotechnology ,Government ,Gesture Recognition from Video ,3D Convolutional Neural Network ,02 engineering and technology ,Public administration ,Interaction human-robot ,020901 industrial engineering & automation ,Work (electrical) ,Action (philosophy) ,Gesture recognition ,Political science ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,European commission ,Christian ministry ,Arquitectura y Tecnología de Computadores ,Robotic arm ,Ingeniería de Sistemas y Automática - Abstract
In the past, methods for hand-sign recognition have been successfully tested in Human-Robot Interaction (HRI) using traditional methodologies based on static image features and machine learning. However, the recognition of gestures in video sequences is still an open problem, because current detection methods achieve low scores when the background is undefined or the scenario is unstructured. Deep learning techniques have been applied to this problem in recent years. In this paper, we present a study that analyses the performance of a 3DCNN architecture for hand gesture recognition in an unstructured scenario. The system yields a score of 73% in both accuracy and F1. The aim of the work is the implementation of a system for commanding robots with gestures recorded on video in real scenarios. This work was funded by the Ministry of Economy, Industry and Competitiveness of the Spanish Government through DPI2015-68087-R and the pre-doctoral grant BES-2016-078290, and by the European Commission and FEDER funds through the project COMMANDIA (SOE2/P1/F0638), action supported by Interreg-V Sudoe.
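A minimal sketch of a 3DCNN video classifier of the kind evaluated, assuming 16-frame RGB clips and 10 gesture classes; the clip length, resolution and head are assumptions, not the paper's exact architecture.

```python
# Minimal sketch: 3D convolutions over short RGB clips for gestures.
import torch
import torch.nn as nn

class Gesture3DCNN(nn.Module):
    def __init__(self, n_gestures=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.head = nn.Linear(32 * 4 * 16 * 16, n_gestures)

    def forward(self, clip):          # clip: (B, 3, frames, H, W)
        h = self.features(clip)
        return self.head(h.flatten(1))

clip = torch.rand(2, 3, 16, 64, 64)   # two 16-frame RGB clips
print(Gesture3DCNN()(clip).shape)     # torch.Size([2, 10])
```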
- Published
- 2019
- Full Text
- View/download PDF
47. TactileGCN: A Graph Convolutional Network for Predicting Grasp Stability with Tactile Sensors
- Author
-
Brayan S. Zapata-Impata, Sergio Orts-Escolano, Pablo Gil, Jose Garcia-Rodriguez, Alberto Garcia-Garcia, Universidad de Alicante. Departamento de Ciencia de la Computación e Inteligencia Artificial, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Departamento de Tecnología Informática y Computación, Robótica y Visión Tridimensional (RoViT), Automática, Robótica y Visión Artificial, and Arquitecturas Inteligentes Aplicadas (AIA)
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Grasping ,Computer science ,Machine Learning (stat.ML) ,02 engineering and technology ,01 natural sciences ,Machine Learning (cs.LG) ,Computer Science - Robotics ,Deep Learning ,Statistics - Machine Learning ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,Graph Neural Networks ,Tactile Sensors ,business.industry ,Deep learning ,010401 analytical chemistry ,GRASP ,020206 networking & telecommunications ,Robotics ,Ciencia de la Computación e Inteligencia Artificial ,0104 chemical sciences ,Grasp Stability ,Graph (abstract data type) ,Artificial intelligence ,business ,Robotics (cs.RO) ,Arquitectura y Tecnología de Computadores ,Tactile sensor ,Ingeniería de Sistemas y Automática - Abstract
Tactile sensors provide useful contact data during interaction with an object, which can be used to learn to determine the stability of a grasp accurately. Most works in the literature represent tactile readings as plain feature vectors or matrix-like tactile images, using them to train machine learning models. In this work, we explore an alternative way of exploiting tactile information to predict grasp stability by leveraging graph-like representations of tactile data, which preserve the actual spatial arrangement of the sensor's taxels and their locality. In experimentation, we trained a Graph Neural Network to binary-classify grasps as stable or slippery. To train the network and prove its predictive capabilities for the problem at hand, we captured a novel dataset of approximately 5000 three-fingered grasps across 41 objects for training and 1000 grasps with 10 unknown objects for testing. Our experiments prove that this novel approach can be effectively used to predict grasp stability. This work has been funded by the Spanish Government with FEDER funds (TIN2016-76515-R and DPI2015-68087-R), by two grants for PhD studies (FPU15/04516 and BES-2016-078290), by regional projects (GV/2018/022 and GRE16-19) and by the European Commission (COMMANDIA SOE2/P1/F0638), action supported by Interreg-V Sudoe.
- Published
- 2019
48. Learning Spatio Temporal Tactile Features with a ConvLSTM for the Direction Of Slip Detection
- Author
-
Pablo Gil, Fernando Torres, Brayan S. Zapata-Impata, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, and Automática, Robótica y Visión Artificial
- Subjects
0209 industrial biotechnology ,Computer science ,Robot manipulator ,02 engineering and technology ,Slip (materials science) ,lcsh:Chemical technology ,01 natural sciences ,Biochemistry ,Article ,Analytical Chemistry ,020901 industrial engineering & automation ,direction of slip ,Computer vision ,lcsh:TP1-1185 ,Electrical and Electronic Engineering ,Spatio-temporal feature learning ,Instrumentation ,Slipping ,business.industry ,tactile processing ,Deep learning ,010401 analytical chemistry ,GRASP ,deep learning ,Atomic and Molecular Physics, and Optics ,0104 chemical sciences ,Direction of slip ,spatio-temporal feature learning ,Tactile processing ,Artificial intelligence ,business ,Tactile sensor ,Ingeniería de Sistemas y Automática - Abstract
Robotic manipulators constantly have to deal with the complex task of detecting whether a grasp is stable or whether, in contrast, the grasped object is slipping. Recognising the type of slippage (translational or rotational) and its direction is more challenging than detecting stability alone, but is simultaneously of greater use for correcting the aforementioned grasping issues. In this work, we propose a learning methodology for detecting the direction of a slip (seven categories) using spatio-temporal tactile features learnt from one tactile sensor. Tactile readings are pre-processed and fed to a ConvLSTM that learns to detect these directions with just 50 ms of data. We have extensively evaluated the performance of the system and have achieved relatively high results in detecting the direction of slip on unseen objects with familiar properties (82.56% accuracy).
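A minimal Keras sketch of the sequence model described, assuming a short window of tactile frames and a 7-way softmax output; the exact label set, frame rate and taxel layout are assumptions, not the paper's configuration.

```python
# Minimal sketch: ConvLSTM over a short window of tactile frames,
# classifying the direction of slip into 7 categories.
import tensorflow as tf

T, H, W = 10, 4, 6   # e.g. 10 tactile frames covering roughly 50 ms

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, H, W, 1)),
    tf.keras.layers.ConvLSTM2D(16, kernel_size=3, padding="same"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(7, activation="softmax"),  # 7 slip categories
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```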
- Published
- 2019
49. Diseño de un Sistema de Aprendizaje Basado en Proyecto para el Máster Universitario en Automática y Robótica
- Author
-
Jara, Carlos A., Pomares, Jorge, Garcia, Gabriel J., Ramón, José L., López Martí, José David, Úbeda, Andrés, Martínez Maciá, Juan, Márquez, Andrés, Neipp, Cristian, Blanes Payá, María José, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Human Robotics (HURO), Automática, Robótica y Visión Artificial, and Holografía y Procesado Óptico
- Subjects
ABP ,Física Aplicada ,Experiencias prácticas ,Robótica ,Ingeniería de Sistemas y Automática - Abstract
PBL (Project-Based Learning) is a teaching methodology in which the student learns the concepts of a subject by carrying out a project or solving a problem appropriately designed and formulated by the teacher. Several studies show that PBL fosters very important skills, such as group work, autonomous learning, time planning, project work and oral and written expression, and improves student motivation, which translates into better academic performance and greater persistence in study. This kind of experience can be beneficial both for students, who develop the new skills just mentioned, and for teachers, who must adapt to the new demands of both the academic environment posed by the adaptation to the EHEA and the labour market. This work aims to use the PBL methodology in each of the robotics-related subjects of the Master's Degree in Automation and Robotics, in order to design, build and program a robot throughout the Master's programme. Each of the subjects involved in the network will define how it participates in the overall project. Students can thus acquire the competences of each subject while working on a large-scale project that finally yields a real robot.
- Published
- 2019
50. Detection of bodies in maritime rescue operations using Unmanned Aerial Vehicles with multispectral cameras
- Author
-
Pablo Gil, Antonio-Javier Gallego, Robert B. Fisher, Antonio Pertusa, Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos, Universidad de Alicante. Departamento de Física, Ingeniería de Sistemas y Teoría de la Señal, Universidad de Alicante. Instituto Universitario de Investigación Informática, Reconocimiento de Formas e Inteligencia Artificial, and Automática, Robótica y Visión Artificial
- Subjects
0209 industrial biotechnology ,Government ,Horizon (archaeology) ,business.industry ,Environmental resource management ,Multispectral image ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Emergency response ,Environmental monitoring ,02 engineering and technology ,Computer Science Applications ,020901 industrial engineering & automation ,Aerial robotics ,Control and Systems Engineering ,Lenguajes y Sistemas Informáticos ,0202 electrical engineering, electronic engineering, information engineering ,Learning ,Perception ,020201 artificial intelligence & image processing ,Christian ministry ,Business ,Ingeniería de Sistemas y Automática - Abstract
In this study, we use unmanned aerial vehicles equipped with multispectral cameras to search for bodies in maritime rescue operations. A series of flights were performed in open-water scenarios in the northwest of Spain, using a certified aquatic rescue dummy in dangerous areas and real people when the weather conditions allowed it. The multispectral images were aligned and used to train a convolutional neural network for body detection. An exhaustive evaluation was performed to assess the best combination of spectral channels for this task. Three approaches based on a MobileNet topology were evaluated, using (a) the full image, (b) a sliding window, and (c) a precise localization method. The first method classifies an input image as containing a body or not, the second uses a sliding window to yield a class for each subimage, and the third uses transposed convolutions returning a binary output in which the body pixels are marked. In all cases, the MobileNet architecture was modified by adding custom layers and preprocessing the input to align the multispectral camera channels. The evaluation shows that the proposed methods yield reliable results, obtaining the best classification performance when combining the green, red-edge, and near-infrared channels. We conclude that the precise localization approach is the most suitable method, obtaining an accuracy similar to that of the sliding window but achieving a spatial localization close to 1 m. The presented system is about to be deployed for real maritime rescue operations carried out by Babcock Mission Critical Services Spain. This study was performed in collaboration with Babcock MCS Spain and funded by the Galicia Region Government through the Civil UAVs Initiative program, the Spanish Government's Ministry of Economy, Industry, and Competitiveness through the RTC-2014-1863-8 and INAER4-14Y (IDI-20141234) projects, and grant number 730897 under the HPC-EUROPA3 project supported by Horizon 2020.
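A minimal sketch of the best-performing channel combination, assuming the bands are already aligned: green, red-edge and near-infrared images are stacked as a 3-channel input to a MobileNet-style classifier. The input size, weights choice and classification head are illustrative, not the paper's exact custom layers.

```python
# Minimal sketch: stack three spectral bands and classify with a
# MobileNetV2 backbone. Shapes and the head are illustrative.
import numpy as np
import tensorflow as tf

def stack_bands(green, red_edge, nir):
    """Aligned single-band images (H, W) -> one (H, W, 3) input."""
    return np.stack([green, red_edge, nir], axis=-1).astype("float32")

base = tf.keras.applications.MobileNetV2(
    include_top=False, weights=None, input_shape=(224, 224, 3))
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # body present or not
])
model.compile(optimizer="adam", loss="binary_crossentropy")

x = stack_bands(*np.random.rand(3, 224, 224))[None]  # one dummy sample
print(model.predict(x, verbose=0).shape)              # (1, 1)
```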
- Published
- 2018
- Full Text
- View/download PDF