
Adaptive Transformers in RL

Authors:
Kumar, Shakti
Parker, Jerrod
Naderian, Panteha
Publication Year:
2020

Abstract

Recent developments in Transformers have opened interesting new areas of research in partially observable reinforcement learning tasks. Results from late 2019 showed that Transformers are able to outperform LSTMs on both memory-intensive and reactive tasks. In this work we first partially replicate the results of Stabilizing Transformers in RL on both reactive and memory-based environments. We then show a performance improvement, coupled with reduced computation, when adding an adaptive attention span to this Stable Transformer on a challenging DMLab30 environment. The code for all our experiments and models is available at https://github.com/jerrodparker20/adaptive-transformers-in-rl.

Comment: 10 pages with 9 figures and 4 tables. Main text is 6 pages, appendix is 4 pages.
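The adaptive attention span mechanism the abstract refers to (Sukhbaatar et al., 2019) lets each attention head learn how far back in the memory it attends, so heads that only need recent context stop paying for the full window. The authors' actual implementation is in the linked repository; the sketch below is only a minimal, hypothetical illustration of the soft-masking idea, and the names AdaptiveSpanMask, span_frac, and ramp are invented for this example rather than taken from their code.

import torch
import torch.nn as nn

class AdaptiveSpanMask(nn.Module):
    """Soft attention-span mask in the spirit of Sukhbaatar et al. (2019).

    Each head learns a span z; the post-softmax attention weight at
    distance x from the query is scaled by clamp((ramp + z - x) / ramp,
    0, 1), so distant positions fade out smoothly and computation over
    them can be truncated.
    """

    def __init__(self, num_heads, max_span, ramp=32):
        super().__init__()
        self.max_span = max_span
        self.ramp = ramp
        # One learnable span fraction per head, initialised to the full span.
        self.span_frac = nn.Parameter(torch.ones(num_heads, 1, 1))

    def forward(self, attn):
        # attn: (batch, heads, query_len, key_len), with keys ordered
        # oldest-to-newest relative to the current query position.
        key_len = attn.size(-1)
        dist = torch.arange(key_len - 1, -1, -1,
                            device=attn.device, dtype=attn.dtype)
        z = self.span_frac.clamp(0, 1) * self.max_span
        mask = ((self.ramp + z - dist) / self.ramp).clamp(0, 1)
        attn = attn * mask
        # Renormalise so the surviving weights still sum to 1.
        return attn / attn.sum(dim=-1, keepdim=True).clamp(min=1e-8)

    def span_penalty(self):
        # L1 term added to the training loss; it pushes each head toward
        # the shortest span that still works, which is where the reduced
        # computation reported in the abstract would come from.
        return self.span_frac.clamp(0, 1).mean()

In training one would apply this mask to each head's attention weights and add span_penalty(), scaled by a small coefficient, to the loss; heads whose learned spans shrink allow the key/value memory they attend over to be truncated.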

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2004.03761
Document Type:
Working Paper