
GalleryGPT: Analyzing Paintings with Large Multimodal Models

Authors :
Bin, Yi
Shi, Wenhao
Ding, Yujuan
Hu, Zhiqiang
Wang, Zheng
Yang, Yang
Ng, See-Kiong
Shen, Heng Tao
Publication Year :
2024

Abstract

Artwork analysis is an important and fundamental skill for art appreciation that can enrich personal aesthetic sensibility and foster critical thinking. Understanding artworks is challenging due to their subjective nature, diverse interpretations, and complex visual elements, and it requires expertise in art history, cultural background, and aesthetic theory. However, limited by data collection and model capability, previous work on automatically analyzing artworks has mainly focused on classification, retrieval, and other simple tasks, which falls far short of the goal of AI-driven art understanding. To advance research in this direction, in this paper we take a step further and compose comprehensive analyses, inspired by the remarkable perception and generation abilities of large multimodal models. Specifically, we first propose the task of composing paragraph-level analyses of artworks (paintings in this paper) that focus only on visual characteristics, in order to form a more comprehensive understanding of artworks. To support research on formal analysis, we collect a large dataset, PaintingForm, with about 19k painting images and 50k analysis paragraphs. We further introduce a large multimodal model for composing painting analyses, dubbed GalleryGPT, which is slightly modified from and fine-tuned on the LLaVA architecture using our collected data. We conduct formal analysis generation and zero-shot experiments across several datasets to assess the capacity of our model. The results show remarkable performance improvements compared with powerful baseline LMMs, demonstrating its superior ability in art analysis and generalization. The code and model are available at: https://github.com/steven640pixel/GalleryGPT

Comment: Accepted as Oral Presentation at ACM Multimedia 2024
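Since GalleryGPT is described as a LLaVA-style model fine-tuned for formal painting analysis, the following is a minimal sketch of how one might prompt such a model for a visual-characteristics analysis using the Hugging Face transformers LLaVA interface. The base checkpoint "llava-hf/llava-1.5-7b-hf", the local file name "painting.jpg", and the prompt wording are assumptions for illustration; they are not the authors' released pipeline, whose weights and code are distributed via the repository above.

# Sketch: prompt a LLaVA-style checkpoint to compose a formal analysis of a painting.
# Not the authors' code; model ID, file name, and prompt are illustrative assumptions.
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # stand-in for a GalleryGPT checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id, device_map="auto")

image = Image.open("painting.jpg")  # hypothetical local painting image
prompt = ("USER: <image>\nCompose a formal analysis of this painting, focusing "
          "only on visual characteristics such as composition, color, line, "
          "light, and brushwork. ASSISTANT:")

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output_ids[0], skip_special_tokens=True))

The prompt restricts the model to visual characteristics, mirroring the paper's framing of formal analysis as distinct from biographical or historical commentary.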

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2408.00491
Document Type :
Working Paper
Full Text :
https://doi.org/10.1145/3664647.3681656