
Visually-guided adaptive robot (ViGuAR)

Authors :
Ben Chandler
Rick Amerson
Greg Snider
Anatoli Gorchetchnikov
Zlatko Vasilkoski
Jasmin Léveillé
Ennio Mingolla
Gennady Livitz
Heather Ames
Hisham Abdalla
Dick Carter
Massimiliano Versace
M.S. Qureshi
Source :
IJCNN
Publication Year :
2011
Publisher :
IEEE, 2011.

Abstract

A neural modeling platform known as Cog ex Machina (Cog), developed in the context of the DARPA SyNAPSE program, offers a computational environment that promises, in the foreseeable future, the creation of adaptive whole-brain systems subserving complex behavioral functions in virtual and robotic agents. Cog is designed to operate on low-power, extremely storage-dense memristive hardware that would support massively parallel, scalable computations. We report an adaptive robotic agent, ViGuAR, that we developed as a neural model implemented on the Cog platform. The neuromorphic architecture of the ViGuAR brain is designed to support visually-guided navigation and learning, which, in combination with the path-planning, memory-driven navigation agent MoNETA, also developed at the Neuromorphics Lab at Boston University, should effectively account for a wide range of key features of rodents' navigational behavior.

Details

Database :
OpenAIRE
Journal :
The 2011 International Joint Conference on Neural Networks
Accession number :
edsair.doi...........a789ddbc3d11c6fdda28e461d5d3e3a3