
Investigating Bidimensional Downsampling in Vision Transformer Models

Abstract: Vision Transformers (ViT) and other Transformer-based architectures for image classification have achieved promising performance in the last two years. However, ViT-based models require large datasets, memory, and computational power to obtain state-of-the-art results compared to more traditional architectures. Indeed, the standard ViT maintains a full-length patch sequence throughout inference, which is redundant and lacks a hierarchical representation. With the goal of increasing the efficiency of Transformer-based models, we explore the application of a 2D max-pooling operator on the outputs of Transformer encoders. We conduct extensive experiments on the CIFAR-100 and ImageNet datasets, considering both accuracy and efficiency metrics, with the final goal of reducing the token sequence length without affecting classification performance. Experimental results show that bidimensional downsampling can outperform previous classification approaches while requiring relatively limited computational resources.
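As a rough illustration of the idea in the abstract (not the authors' exact implementation), the sketch below shows how a 2D max-pooling operator can be applied to a ViT token sequence: the patch tokens are reshaped back into their 2D spatial grid, pooled, and flattened again. The function name `downsample_tokens`, the handling of the class token, and the placement between encoder blocks are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def downsample_tokens(tokens: torch.Tensor, has_cls: bool = True) -> torch.Tensor:
    """Halve the patch grid of a ViT token sequence via 2D max-pooling.

    tokens: (B, 1 + N, D) if has_cls else (B, N, D), with N a perfect square.
    Returns a sequence whose spatial grid is halved along each axis.
    NOTE: a hypothetical sketch, not the paper's reference implementation.
    """
    if has_cls:
        cls_tok, patches = tokens[:, :1], tokens[:, 1:]
    else:
        cls_tok, patches = None, tokens
    b, n, d = patches.shape
    h = w = int(n ** 0.5)  # assume a square patch grid
    grid = patches.transpose(1, 2).reshape(b, d, h, w)  # (B, D, H, W)
    pooled = F.max_pool2d(grid, kernel_size=2)          # (B, D, H/2, W/2)
    out = pooled.flatten(2).transpose(1, 2)             # (B, N/4, D)
    if cls_tok is not None:
        out = torch.cat([cls_tok, out], dim=1)  # class token is left untouched
    return out

# Example: a 14x14 patch grid (ViT-B/16 on 224x224 input) becomes 7x7,
# so 1 + 196 tokens are reduced to 1 + 49.
x = torch.randn(2, 1 + 14 * 14, 768)
print(downsample_tokens(x).shape)  # torch.Size([2, 50, 768])
```

Inserting such an operator between encoder blocks shortens the token sequence for all subsequent layers, which is where the computational savings come from.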


Citation:

Bruno, Paolo; Amoroso, Roberto; Cornia, Marcella; Cascianelli, Silvia; Baraldi, Lorenzo; Cucchiara, Rita, "Investigating Bidimensional Downsampling in Vision Transformer Models," Proceedings of the 21st International Conference on Image Analysis and Processing, vol. 13232, Lecce, Italy, pp. 287-299, 23-27 May 2022. DOI: 10.1007/978-3-031-06430-2_24

