
Do Generalised Classifiers really work on Human Drawn Sketches?

Authors:
Bandyopadhyay, Hmrishav
Chowdhury, Pinaki Nath
Sain, Aneeshan
Koley, Subhadeep
Xiang, Tao
Bhunia, Ayan Kumar
Song, Yi-Zhe
Publication Year:
2024

Abstract

This paper, for the first time, marries large foundation models with human sketch understanding. We demonstrate what this brings -- a paradigm shift in generalised sketch representation learning (e.g., classification). This generalisation happens on two fronts: (i) generalisation across unknown categories (i.e., open-set), and (ii) generalisation across abstraction levels (i.e., good and bad sketches), both timely challenges that remain unsolved in the sketch literature. Our design is intuitive and centred around transferring the already stellar generalisation ability of CLIP to benefit generalised learning for sketches. We first "condition" the vanilla CLIP model by learning sketch-specific prompts using a novel auxiliary head for raster-to-vector sketch conversion; this, importantly, makes CLIP "sketch-aware". We then attune CLIP to the inherently different levels of sketch abstraction. This is achieved by learning a codebook of abstraction-specific prompt biases, a weighted combination of which represents sketches across abstraction levels -- low-abstraction edge-maps, medium-abstraction sketches in TU-Berlin, and highly abstract doodles in QuickDraw. Our framework surpasses popular sketch representation learning algorithms in both zero-shot and few-shot setups, and in novel settings across different abstraction boundaries.

Comment: ECCV 2024
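To make the prompt-codebook idea in the abstract concrete, below is a minimal PyTorch sketch of one plausible reading of the abstraction-specific prompt biases: a small head predicts mixing weights over a learned codebook, and the weighted bias is added to a base sketch prompt. The class name, dimensions, and the weight-prediction head are illustrative assumptions, not the authors' published implementation.

    import torch
    import torch.nn as nn

    class AbstractionPromptCodebook(nn.Module):
        # Hypothetical sketch of the abstraction-specific prompt-bias
        # codebook described in the abstract. All names and sizes here
        # are assumptions made for illustration.
        def __init__(self, num_levels: int = 3, prompt_len: int = 8, dim: int = 512):
            super().__init__()
            # One learnable prompt bias per abstraction level
            # (e.g. edge-maps, TU-Berlin sketches, QuickDraw doodles).
            self.codebook = nn.Parameter(torch.randn(num_levels, prompt_len, dim) * 0.02)
            # Base sketch-specific prompt shared across abstraction levels.
            self.base_prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
            # Small head that predicts per-level mixing weights from a
            # global sketch feature (an assumed design choice).
            self.weight_head = nn.Linear(dim, num_levels)

        def forward(self, sketch_feat: torch.Tensor) -> torch.Tensor:
            # sketch_feat: (B, dim) global feature of the input sketch.
            w = self.weight_head(sketch_feat).softmax(dim=-1)     # (B, num_levels)
            # Weighted combination of the codebook entries.
            bias = torch.einsum("bn,npd->bpd", w, self.codebook)  # (B, prompt_len, dim)
            # Conditioned prompt to prepend to CLIP's input tokens.
            return self.base_prompt.unsqueeze(0) + bias

Under this reading, the conditioned prompt would be fed to a frozen CLIP encoder, so only the codebook, base prompt, and weight head are trained; how the prompt is injected into CLIP is not specified by the abstract and is left out here.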

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2407.03893
Document Type:
Working Paper