
MSPE: Multi-Scale Patch Embedding Prompts Vision Transformers to Any Resolution

Authors:
Liu, Wenzhuo
Zhu, Fei
Ma, Shijie
Liu, Cheng-Lin
Publication Year:
2024

Abstract

Although Vision Transformers (ViTs) have recently advanced computer vision tasks significantly, an important real-world problem has been overlooked: adapting to variable input resolutions. Typically, images are resized to a fixed resolution, such as 224x224, for efficiency during training and inference. However, a uniform input size conflicts with real-world scenarios, where images naturally vary in resolution, and modifying a model's preset resolution can severely degrade its performance. In this work, we propose to enhance the model's adaptability to resolution variation by optimizing the patch embedding. The proposed method, called Multi-Scale Patch Embedding (MSPE), substitutes the standard patch embedding with multiple variable-sized patch kernels and selects the best parameters for each resolution, eliminating the need to resize the original image. Our method does not require high-cost training or modifications to other parts of the model, making it easy to apply to most ViT models. Experiments on image classification, segmentation, and detection tasks demonstrate the effectiveness of MSPE, which yields superior performance on low-resolution inputs and performs comparably to existing methods on high-resolution inputs.
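To make the idea concrete, here is a minimal NumPy sketch of a multi-scale patch embedding in the spirit of the abstract: several patch kernels of different sizes share one embedding dimension, and the kernel whose patch size best matches the input resolution is selected, so the token count stays stable without resizing the image. The kernel sizes, the token-count target, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

EMBED_DIM = 64
PATCH_SIZES = [8, 16, 32]  # hypothetical candidate kernel sizes

rng = np.random.default_rng(0)
# one linear projection (flattened RGB patch -> embedding) per patch size
KERNELS = {p: rng.standard_normal((p * p * 3, EMBED_DIM)) * 0.02
           for p in PATCH_SIZES}

def select_patch_size(resolution, target_tokens=196):
    # pick the patch size whose token grid is closest to the target
    # (196 = 14x14 tokens, as a 224-input ViT-B/16 would produce)
    return min(PATCH_SIZES,
               key=lambda p: abs((resolution // p) ** 2 - target_tokens))

def embed(image):
    h, w, c = image.shape
    p = select_patch_size(h)
    gh, gw = h // p, w // p
    # cut the image into non-overlapping p x p patches and flatten each
    patches = image[: gh * p, : gw * p].reshape(gh, p, gw, p, c)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(gh * gw, p * p * c)
    return patches @ KERNELS[p]  # token matrix of shape (gh*gw, EMBED_DIM)

for res in (112, 224, 448):
    tokens = embed(rng.standard_normal((res, res, 3)))
    print(res, select_patch_size(res), tokens.shape)
    # -> 112 8 (196, 64) / 224 16 (196, 64) / 448 32 (196, 64)
```

Note how a 112x112, 224x224, and 448x448 input all map to the same 196-token sequence through different kernels, which is what lets the rest of the ViT stay unchanged.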

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2405.18240
Document Type:
Working Paper