
ASSET: Autoregressive Semantic Scene Editing with Transformers at High Resolutions

Authors:
Liu, Difan
Shetty, Sandesh
Hinz, Tobias
Fisher, Matthew
Zhang, Richard
Park, Taesung
Kalogerakis, Evangelos
Publication Year:
2022

Abstract

We present ASSET, a neural architecture for automatically modifying an input high-resolution image according to a user's edits on its semantic segmentation map. Our architecture is based on a transformer with a novel attention mechanism. Our key idea is to sparsify the transformer's attention matrix at high resolutions, guided by dense attention extracted at lower image resolutions. While previous attention mechanisms are computationally too expensive for handling high-resolution images or are overly constrained within specific image regions hampering long-range interactions, our novel attention mechanism is both computationally efficient and effective. Our sparsified attention mechanism is able to capture long-range interactions and context, leading to synthesizing interesting phenomena in scenes, such as reflections of landscapes onto water or flora consistent with the rest of the landscape, that were not possible to generate reliably with previous convnets and transformer approaches. We present qualitative and quantitative results, along with user studies, demonstrating the effectiveness of our method.

Comment: SIGGRAPH 2022 - Journal Track
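The abstract describes sparsifying high-resolution attention using dense attention computed at a lower resolution as guidance. The snippet below is a minimal, hypothetical PyTorch sketch of that general idea, not the paper's implementation: each high-resolution query attends only to the top-k keys ranked by an upsampled coarse attention map. The function name, the top-k budget, and the nearest-neighbour upsampling of the guidance map are all illustrative assumptions.

```python
# Hypothetical sketch of attention sparsified by a low-resolution guidance map.
# NOT the authors' code; shapes, names, and the top-k rule are assumptions.
import torch
import torch.nn.functional as F

def guided_sparse_attention(q, k, v, guide, top_k=8):
    """
    q, k, v: (B, N, D) high-resolution queries/keys/values.
    guide:   (B, N, N) coarse attention scores expanded to the full token
             grid (e.g. dense attention from a downsampled image,
             nearest-neighbour upsampled).
    Each query attends only to its top_k keys according to `guide`.
    """
    B, N, D = q.shape
    # Per query, pick the key indices with the largest coarse scores.
    idx = guide.topk(top_k, dim=-1).indices            # (B, N, top_k)
    batch_idx = torch.arange(B).view(B, 1, 1)
    k_sel = k[batch_idx, idx]                          # (B, N, top_k, D)
    v_sel = v[batch_idx, idx]                          # (B, N, top_k, D)
    # Scaled dot-product attention restricted to the selected keys.
    scores = torch.einsum('bnd,bnkd->bnk', q, k_sel) / D ** 0.5
    weights = F.softmax(scores, dim=-1)                # (B, N, top_k)
    return torch.einsum('bnk,bnkd->bnd', weights, v_sel)

if __name__ == "__main__":
    B, N, D = 1, 64, 32                    # 64 high-res tokens, dim 32 (toy sizes)
    q, k, v = (torch.randn(B, N, D) for _ in range(3))
    # Stand-in guidance: dense attention over 16 low-res tokens, each
    # repeated 4x to cover the 64 high-res tokens.
    low = torch.randn(B, 16, 16).softmax(-1)
    guide = low.repeat_interleave(4, dim=1).repeat_interleave(4, dim=2)
    out = guided_sparse_attention(q, k, v, guide, top_k=8)
    print(out.shape)                       # torch.Size([1, 64, 32])
```

The point of the sketch is the cost profile: the dense pass is quadratic only in the small low-resolution token count, while the high-resolution pass scales with N x top_k instead of N x N, which is what makes long-range context affordable at high resolutions.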

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2205.12231
Document Type:
Working Paper