
Enhancing Analogical Reasoning in the Abstraction and Reasoning Corpus via Model-Based RL

Authors:
Lee, Jihwan
Sim, Woochang
Kim, Sejin
Kim, Sundong
Publication Year:
2024

Abstract

This paper demonstrates that model-based reinforcement learning (model-based RL) is a suitable approach for the task of analogical reasoning. We hypothesize that model-based RL can solve analogical reasoning tasks more efficiently through the creation of internal models. To test this, we compared DreamerV3, a model-based RL method, with Proximal Policy Optimization (PPO), a model-free RL method, on Abstraction and Reasoning Corpus (ARC) tasks. Our results indicate that model-based RL not only outperforms model-free RL in learning and generalizing from single tasks but also shows significant advantages in reasoning across similar tasks.

Comment: Accepted to IJCAI 2024 IARML Workshop

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2408.14855
Document Type:
Working Paper