1. Repurposing existing deep networks for caption and aesthetic-guided image cropping.
- Author
- Horanyi, Nora; Xia, Kedi; Yi, Kwang Moo; Bojja, Abhishake Kumar; Leonardis, Aleš; Chang, Hyung Jin
- Subjects
- *PHOTOGRAPH captions, *IMAGE processing, *AESTHETICS, *INTENTION, *ANNOTATIONS
- Abstract
• The core research question of this paper is: how can we find the image region described by a user, such that the output crop represents and preserves the caption information while resulting in an aesthetically pleasing image?
• We have proposed a caption- and aesthetics-guided framework for cropping images according to the user's intention. Our framework is the first to account for the user's intention directly from the provided image caption.
• We argue that the currently available image cropping and caption grounding datasets are not suitable for our description-based image cropping task. Therefore, we propose a novel dataset with multiple ground-truth bounding box annotations for each caption.
• The experiments in Section 4.2 show that we can achieve better performance than the baseline methods for caption-based image cropping by repurposing existing deep networks.

We propose a novel optimization framework that crops a given image based on the user description and aesthetics. Unlike existing image cropping methods, where one typically trains a deep network to regress to crop parameters or cropping actions, we propose to directly optimize the cropping parameters by repurposing pre-trained networks for image captioning and aesthetic tasks, without any fine-tuning, thereby avoiding training a separate network. Specifically, we search for the best crop parameters that minimize a combined loss of the initial objectives of these networks. To make the optimization stable, we propose three strategies: (i) multi-scale bilinear sampling, (ii) annealing the scale of the crop region, thereby effectively reducing the parameter space, and (iii) aggregation of multiple optimization results. Through various quantitative and qualitative evaluations, we show that our framework can produce crops that are well aligned with the intended user descriptions and aesthetically pleasing. [ABSTRACT FROM AUTHOR]
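The abstract's optimization recipe (directly optimizing crop parameters against frozen captioning and aesthetic losses, with scale annealing) can be illustrated with a minimal sketch. This is not the authors' code: `caption_loss_net`, `aesthetic_loss_net`, and `differentiable_crop` are hypothetical stand-ins, assumed here to return scalar losses on a cropped image; the paper's multi-scale sampling and aggregation of multiple optimization runs are omitted.

```python
# Conceptual sketch only: optimize crop parameters (center, scale) so that
# a frozen captioning loss and a frozen aesthetic loss are jointly minimized
# on a differentiably cropped image. Hypothetical stand-ins, not the paper's code.
import torch
import torch.nn.functional as F

def differentiable_crop(image, center, scale, out_size=224):
    # Bilinear sampling of an axis-aligned crop via an affine grid,
    # so gradients flow back to the crop parameters.
    b = image.shape[0]
    theta = torch.zeros(b, 2, 3, device=image.device)
    theta[:, 0, 0] = scale    # crop width relative to image width
    theta[:, 1, 1] = scale    # crop height relative to image height
    theta[:, :, 2] = center   # crop center in [-1, 1] coordinates
    grid = F.affine_grid(theta, (b, image.shape[1], out_size, out_size),
                         align_corners=False)
    return F.grid_sample(image, grid, align_corners=False)

def optimize_crop(image, caption_loss_net, aesthetic_loss_net,
                  steps=200, lr=0.05, scale_start=1.0, scale_end=0.5):
    # Only the crop parameters are optimized; the two pre-trained
    # networks stay frozen (no fine-tuning), as described in the abstract.
    center = torch.zeros(image.shape[0], 2, device=image.device, requires_grad=True)
    scale = torch.full((image.shape[0],), scale_start,
                       device=image.device, requires_grad=True)
    opt = torch.optim.Adam([center, scale], lr=lr)
    for step in range(steps):
        # Anneal the allowed crop scale downward, shrinking the search space.
        max_scale = scale_start + (scale_end - scale_start) * step / steps
        crop = differentiable_crop(image, center, scale.clamp(0.1, max_scale))
        # Combined loss of the two repurposed networks (assumed scalar outputs).
        loss = caption_loss_net(crop) + aesthetic_loss_net(crop)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return center.detach(), scale.detach()
```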
- Published
- 2022