Graph-based feature learning for neuromorphic vision sensing

Authors :
Bi, Yin
Publication Year :
2020
Publisher :
University College London (University of London), 2020.

Abstract

Neuromorphic vision sensing (NVS) devices represent visual information as sequences of asynchronous discrete events (also known as 'spikes') emitted in response to changes in scene reflectance. Unlike conventional active pixel sensing (APS), NVS allows for significantly higher event sampling rates with substantially increased energy efficiency and robustness to illumination changes. However, neuromorphic vision sensing comes with two key challenges: (i) the lack of large-scale annotated datasets with which to train advanced machine learning frameworks; and (ii) feature representation for NVS lags far behind its APS-based counterparts, resulting in lower accuracy on high-level computer vision tasks.

In this thesis, we attempt to bridge these gaps. We first propose an NVS emulation framework, termed PIX2NVS, that converts frames from APS videos into emulated neuromorphic spike events, so that large annotated NVS data can be generated from existing video frame collections (e.g., UCF101, YouTube-8M, YFCC100M) used in machine learning research. We evaluate PIX2NVS with three proposed distance metrics and test the emulated data on two recognition applications.

Furthermore, given the sparse and asynchronous nature of NVS, we propose a compact graph representation for NVS that allows for end-to-end learning with graph convolutional neural networks. We couple this with a novel end-to-end feature learning framework that accommodates both appearance-based and motion-based tasks. The core of our framework comprises a spatial feature learning module, which uses our proposed residual-graph CNN (RG-CNN) to learn appearance-based features directly from graphs. We extend this with our proposed Graph2Grid block and a temporal feature learning module to efficiently model temporal dependencies over multiple graphs and to cover a long temporal extent. We show that this framework generalizes to object classification, action recognition, action similarity labeling and scene recognition, with state-of-the-art results. Importantly, our framework preserves the spatial and temporal coherence of spike events while requiring less computation and memory.

Finally, to address the absence of large real-world NVS datasets for complex recognition tasks, we introduce, evaluate and make available a 100k-sample dataset of NVS recordings of American Sign Language letters (ASL-DVS), acquired with an iniLabs DAVIS240c device under real-world conditions, as well as three neuromorphic human action datasets (UCF101-DVS, HMDB51-DVS and ASLAN-DVS) and one scene recognition dataset (YUPENN-DVS), recorded with the DAVIS240c capturing the reflectance of their screen playback.
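
To make the frame-to-event emulation idea concrete, the following is a minimal sketch in the spirit of PIX2NVS, assuming a simple per-pixel log-intensity change model with a fixed contrast threshold. The threshold value, the reset rule and the function name emulate_events are illustrative assumptions, not the thesis's exact algorithm.

    import numpy as np

    def emulate_events(frames, timestamps, threshold=0.15):
        # frames: (T, H, W) array of grayscale intensities in [0, 1]
        # timestamps: length-T array of frame times in seconds
        # Returns a list of (x, y, t, polarity) events, polarity in {+1, -1}.
        eps = 1e-6                              # avoid log(0)
        log_ref = np.log(frames[0] + eps)       # per-pixel reference log intensity
        events = []
        for k in range(1, len(frames)):
            log_cur = np.log(frames[k] + eps)
            diff = log_cur - log_ref
            fired = np.abs(diff) >= threshold   # pixels whose change crosses the threshold
            ys, xs = np.nonzero(fired)
            for y, x in zip(ys, xs):
                events.append((x, y, timestamps[k], 1 if diff[y, x] > 0 else -1))
            log_ref[fired] = log_cur[fired]     # reset the reference only where events fired
        return events

Emulating events from log-intensity changes mirrors how a real NVS pixel responds to relative, rather than absolute, brightness changes, which is what gives the sensor its robustness to illumination.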
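Similarly, the sketch below shows one plausible way to form a compact graph representation from events: each (sub)sampled event becomes a node carrying its polarity as a feature, and edges connect events that fall within a spatio-temporal radius. The radius, the time scaling and the subsampling strategy here are assumptions for illustration; the thesis's exact construction may differ.

    import numpy as np

    def events_to_graph(events, radius=3.0, time_scale=1e4, max_events=2000, seed=0):
        # events: iterable of (x, y, t, polarity) tuples
        # Returns node coordinates, node features and a (2, E) edge index.
        ev = np.asarray(list(events), dtype=np.float64)
        rng = np.random.default_rng(seed)
        if len(ev) > max_events:                # uniform subsampling keeps the graph compact
            ev = ev[rng.choice(len(ev), size=max_events, replace=False)]
        coords = ev[:, :3].copy()
        coords[:, 2] *= time_scale              # bring time onto a scale comparable to pixels
        feats = ev[:, 3:4]                      # event polarity as the node feature
        # Brute-force radius neighbourhood; adequate for a few thousand nodes.
        d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
        adj = (d2 <= radius ** 2) & ~np.eye(len(ev), dtype=bool)
        src, dst = np.nonzero(adj)
        edge_index = np.stack([src, dst])       # both directions, since adj is symmetric
        return coords, feats, edge_index

A graph built this way keeps only the events themselves as nodes, so the representation stays sparse and preserves the spatial and temporal coherence of the spikes, in contrast to densifying events into frame-like grids.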

Subjects

Subjects :
621.3

Details

Language :
English
Database :
British Library EThOS
Publication Type :
Dissertation / Thesis
Accession number :
edsble.819826
Document Type :
Electronic Thesis or Dissertation