CogAgent: A Visual Language Model for GUI Agents

Authors:
Hong, Wenyi
Wang, Weihan
Lv, Qingsong
Xu, Jiazheng
Yu, Wenmeng
Ji, Junhui
Wang, Yan
Wang, Zihan
Zhang, Yuxuan
Li, Juanzi
Xu, Bin
Dong, Yuxiao
Ding, Ming
Tang, Jie
Publication Year: 2023

Abstract

People spend an enormous amount of time on digital devices through graphical user interfaces (GUIs), e.g., computer or smartphone screens. Large language models (LLMs) such as ChatGPT can assist people with tasks like writing emails, but they struggle to understand and interact with GUIs, which limits their potential to increase automation. In this paper, we introduce CogAgent, an 18-billion-parameter visual language model (VLM) specializing in GUI understanding and navigation. By utilizing both low-resolution and high-resolution image encoders, CogAgent supports input at a resolution of 1120×1120, enabling it to recognize tiny page elements and text. As a generalist visual language model, CogAgent achieves the state of the art on five text-rich and four general VQA benchmarks, including VQAv2, OK-VQA, Text-VQA, ST-VQA, ChartQA, InfoVQA, DocVQA, MM-Vet, and POPE. Using only screenshots as input, CogAgent outperforms LLM-based methods that consume extracted HTML text on both PC and Android GUI navigation tasks, Mind2Web and AITW, advancing the state of the art. The model and code are available at https://github.com/THUDM/CogVLM.

Comment: 27 pages, 19 figures
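To make the dual-resolution design concrete, below is a minimal, illustrative sketch of the idea the abstract describes: a low-resolution encoder supplies global-layout tokens to the decoder sequence, while features from a high-resolution (1120×1120) branch are injected through cross-attention so fine page detail is visible without inflating the token count. This is not the authors' implementation; all module names, sizes, and the specific injection scheme here are assumptions for exposition.

```python
# Toy sketch (not CogAgent's actual code) of a dual-resolution VLM block:
# low-res patch tokens join the decoder sequence directly; high-res patch
# tokens are reachable only via cross-attention inside each decoder layer.
import torch
import torch.nn as nn

class ToyPatchEncoder(nn.Module):
    """Turns an image into a sequence of patch embeddings (stand-in for a ViT)."""
    def __init__(self, patch: int, dim: int):
        super().__init__()
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        x = self.proj(img)                        # (B, dim, H/p, W/p)
        return x.flatten(2).transpose(1, 2)       # (B, num_patches, dim)

class DecoderBlockWithHiResXAttn(nn.Module):
    """Decoder layer: self-attention over text + low-res tokens,
    then cross-attention into the high-resolution feature sequence."""
    def __init__(self, dim: int, heads: int):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        self.n1, self.n2, self.n3 = (nn.LayerNorm(dim) for _ in range(3))

    def forward(self, h: torch.Tensor, hires: torch.Tensor) -> torch.Tensor:
        h = h + self.self_attn(self.n1(h), self.n1(h), self.n1(h))[0]
        h = h + self.cross_attn(self.n2(h), hires, hires)[0]  # inject hi-res detail
        return h + self.mlp(self.n3(h))

# Toy forward pass with illustrative sizes.
dim, heads = 64, 4
lo_enc = ToyPatchEncoder(patch=16, dim=dim)       # 224x224  -> 14x14 = 196 tokens
hi_enc = ToyPatchEncoder(patch=80, dim=dim)       # 1120x1120 -> 14x14 = 196 tokens
block = DecoderBlockWithHiResXAttn(dim, heads)

img_lo = torch.randn(1, 3, 224, 224)
img_hi = torch.randn(1, 3, 1120, 1120)
text = torch.randn(1, 32, dim)                    # stand-in text embeddings

h = torch.cat([lo_enc(img_lo), text], dim=1)      # (1, 196 + 32, dim)
out = block(h, hi_enc(img_hi))
print(out.shape)                                  # torch.Size([1, 228, 64])
```

The point of the sketch is the asymmetry: the high-resolution branch never lengthens the decoder's own sequence, so the cost of seeing fine-grained text and tiny page elements stays roughly constant per layer.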

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.2312.08914
Document Type: Working Paper