Single-layer vision transformers for more accurate early exits with less overhead

Arian Bakhtiarnia*, Qi Zhang*, Alexandros Iosifidis*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

Deploying deep learning models in time-critical applications with limited computational resources, for instance in edge computing systems and IoT networks, is a challenging task that often relies on dynamic inference methods such as early exiting. In this paper, we introduce a novel early-exit architecture based on vision transformers, as well as a fine-tuning strategy that significantly increases the accuracy of early exit branches compared to conventional approaches while introducing less overhead. Through extensive experiments on image and audio classification as well as audiovisual crowd counting, we show that our method works for both classification and regression problems, and in both single- and multi-modal settings. Additionally, we introduce a novel method for integrating audio and visual modalities within early exits in audiovisual data analysis, which can lead to more fine-grained dynamic inference.
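The paper's own code is not reproduced on this page; the following is a minimal sketch, in PyTorch, of what an early-exit branch built from a single transformer encoder layer (as the title suggests) could look like. The class name SingleLayerViTExit, all hyperparameters, and the confidence threshold used for the exit decision are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a single-transformer-layer early-exit head.
# All names and hyperparameters below are hypothetical choices.
import torch
import torch.nn as nn

class SingleLayerViTExit(nn.Module):
    """Early-exit head: one transformer layer plus a classifier on a CLS token."""

    def __init__(self, embed_dim: int = 768, num_heads: int = 12, num_classes: int = 1000):
        super().__init__()
        # Learnable CLS token that summarizes the intermediate patch tokens.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        # A single pre-norm transformer encoder layer, in the ViT style.
        self.block = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads,
            dim_feedforward=4 * embed_dim,
            batch_first=True, norm_first=True,
        )
        self.norm = nn.LayerNorm(embed_dim)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D) intermediate features taken from a backbone layer.
        cls = self.cls_token.expand(tokens.size(0), -1, -1)
        x = self.block(torch.cat([cls, tokens], dim=1))
        return self.head(self.norm(x[:, 0]))  # predict from the CLS token

# Usage: attach the branch after an intermediate backbone layer and exit
# early when the prediction is confident enough (0.9 is an arbitrary value).
exit_branch = SingleLayerViTExit(embed_dim=768, num_heads=12, num_classes=10)
feats = torch.randn(2, 196, 768)             # stand-in for backbone features
probs = exit_branch(feats).softmax(dim=-1)
early_exit = probs.max(dim=-1).values > 0.9  # confidence-based exit decision
```

In a multi-exit setting, one such branch would be attached at each of several backbone depths, and inference stops at the first branch whose prediction meets the exit criterion.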

Original language: English
Journal: Neural Networks
Volume: 153
Pages (from-to): 461-473
Number of pages: 13
ISSN: 0893-6080
DOIs
Publication status: Published - September 2022

Keywords

  • Dynamic inference
  • Early exiting
  • Multi-exit architecture
  • Multimodal deep learning
  • Vision transformer
