TY - JOUR
T1 - Single-layer vision transformers for more accurate early exits with less overhead
AU - Bakhtiarnia, Arian
AU - Zhang, Qi
AU - Iosifidis, Alexandros
PY - 2022/9
Y1 - 2022/9
N2 - Deploying deep learning models in time-critical applications with limited computational resources, for instance in edge computing systems and IoT networks, is a challenging task that often relies on dynamic inference methods such as early exiting. In this paper, we introduce a novel architecture for early exiting based on the vision transformer architecture, as well as a fine-tuning strategy that significantly increases the accuracy of early exit branches compared to conventional approaches while introducing less overhead. Through extensive experiments on image and audio classification as well as audiovisual crowd counting, we show that our method works for both classification and regression problems, and in both single- and multi-modal settings. Additionally, we introduce a novel method for integrating audio and visual modalities within early exits in audiovisual data analysis, which can lead to more fine-grained dynamic inference.
AB - Deploying deep learning models in time-critical applications with limited computational resources, for instance in edge computing systems and IoT networks, is a challenging task that often relies on dynamic inference methods such as early exiting. In this paper, we introduce a novel architecture for early exiting based on the vision transformer architecture, as well as a fine-tuning strategy that significantly increases the accuracy of early exit branches compared to conventional approaches while introducing less overhead. Through extensive experiments on image and audio classification as well as audiovisual crowd counting, we show that our method works for both classification and regression problems, and in both single- and multi-modal settings. Additionally, we introduce a novel method for integrating audio and visual modalities within early exits in audiovisual data analysis, which can lead to more fine-grained dynamic inference.
KW - Dynamic inference
KW - Early exiting
KW - Multi-exit architecture
KW - Multimodal deep learning
KW - Vision transformer
UR - https://www.scopus.com/pages/publications/85133931261
U2 - 10.1016/j.neunet.2022.06.038
DO - 10.1016/j.neunet.2022.06.038
M3 - Journal article
C2 - 35816859
SN - 0893-6080
VL - 153
SP - 461
EP - 473
JO - Neural Networks
JF - Neural Networks
ER -