
Biological Neurons Compete with Deep Reinforcement Learning in Sample Efficiency in a Simulated Gameworld

Authors:
Khajehnejad, Moein
Habibollahi, Forough
Paul, Aswin
Razi, Adeel
Kagan, Brett J.
Publication Year: 2024

Abstract

How do biological systems and machine learning algorithms compare in the number of samples required to show significant improvement on a task? We compared the learning efficiency of in vitro biological neural networks with state-of-the-art deep reinforcement learning (RL) algorithms in a simplified simulation of the game 'Pong'. Using DishBrain, a system that embodies in vitro neural networks with in silico computation through a high-density multi-electrode array, we contrasted the learning rate and performance of these biological systems against time-matched learning from three deep RL algorithms (DQN, A2C, and PPO) in the same game environment, allowing a meaningful comparison between biological neural systems and deep RL. We found that when samples were limited to a real-world time course, even these very simple biological cultures outperformed the deep RL algorithms across various game performance characteristics, implying higher sample efficiency. Even when tested across multiple types of information input, to assess the impact of higher-dimensional input data, biological neurons showed faster learning than all deep RL agents.

Comment: 13 pages, 6 figures; 38 supplementary pages, 6 supplementary figures, 4 supplementary tables
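For context on the sample-budgeted deep RL baselines described above, the following is a minimal sketch, not the paper's DishBrain pipeline, of how one deep RL agent might be trained under a fixed sample cap to mimic a time-matched comparison. The choice of PPO, the stable-baselines3/Gymnasium stack, the CartPole-v1 stand-in environment, and the budget value are all illustrative assumptions; the paper uses its own simplified 'Pong' simulation and also evaluates DQN and A2C.

```python
# Minimal sketch (not the paper's DishBrain setup): train a deep RL agent
# under a fixed, small sample budget rather than to convergence.
# PPO and the CartPole-v1 stand-in environment are illustrative assumptions.
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

SAMPLE_BUDGET = 50_000  # assumed cap on environment steps ("samples")

env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=0)

# Stop training once the sample budget is exhausted.
model.learn(total_timesteps=SAMPLE_BUDGET)

# Report mean episode return achieved within the limited budget.
mean_return, std_return = evaluate_policy(model, env, n_eval_episodes=10)
print(f"Mean return after {SAMPLE_BUDGET} samples: {mean_return:.1f} +/- {std_return:.1f}")
```

Repeating such a run for each algorithm with the same step cap gives the kind of samples-matched performance curves that the abstract contrasts with the biological cultures' real-time learning.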

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2405.16946
Document Type: Working Paper