End-to-end Learning of Object Motion Estimation from Retinal Events for Event-based Object Tracking
- Source :
- Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI '20). 2020, New York, USA. AAAI, New York, NY, USA
- Publication Year :
- 2020
Abstract
- Event cameras, which are asynchronous bio-inspired vision sensors, have shown great potential in computer vision and artificial intelligence. However, the application of event cameras to object-level motion estimation or tracking is still in its infancy. The main idea behind this work is to propose a novel deep neural network to learn and regress a parametric object-level motion/transform model for event-based object tracking. To achieve this goal, we propose a synchronous Time-Surface with Linear Time Decay (TSLTD) representation, which effectively encodes the spatio-temporal information of asynchronous retinal events into TSLTD frames with clear motion patterns. We feed the sequence of TSLTD frames to a novel Retinal Motion Regression Network (RMRNet) to perform an end-to-end 5-DoF object motion regression. Our method is compared with state-of-the-art object tracking methods that are based on conventional cameras or event cameras. The experimental results show the superiority of our method in handling various challenging environments, such as fast motion and low-illumination conditions.
- Comment :
- 9 pages, 3 figures
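The abstract describes encoding asynchronous events into synchronous TSLTD frames via linear time decay. The paper's exact construction is not given here, so the following is only an illustrative sketch: events closer to the end of a time window receive values closer to 1, decaying linearly toward 0 for older events, with the function name, two-channel polarity layout, and event tuple format all being assumptions for illustration.

```python
import numpy as np

def tsltd_frame(events, height, width, t_start, t_end):
    """Sketch of a Time-Surface with Linear Time Decay (TSLTD) frame.

    `events` is an iterable of (x, y, t, polarity) tuples with
    polarity in {+1, -1}. Events in [t_start, t_end) are encoded;
    the two-channel (per-polarity) layout is an assumption.
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    duration = t_end - t_start
    for x, y, t, p in events:
        if not (t_start <= t < t_end):
            continue  # ignore events outside the window
        # Linear time decay: the newest events map to values near 1,
        # the oldest (at t_start) to 0.
        value = (t - t_start) / duration
        channel = 0 if p > 0 else 1
        # Keep the most recent (largest) value seen at each pixel.
        frame[channel, y, x] = max(frame[channel, y, x], value)
    return frame
```

A sequence of such frames, computed over consecutive time windows, could then be fed to a regression network like the RMRNet described in the abstract.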
- Subjects :
- Computer Science - Computer Vision and Pattern Recognition
Details
- Database :
- arXiv
- Journal :
- Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI '20). 2020, New York, USA. AAAI, New York, NY, USA
- Publication Type :
- Report
- Accession number :
- edsarx.2002.05911
- Document Type :
- Working Paper
- Full Text :
- https://doi.org/10.1609/aaai.v34i07.6625