Layer Ensembles

Research output: Contribution to book/anthology/report/proceedings › Article in proceedings › Research › peer-review

Abstract

Deep Ensembles, as a type of Bayesian Neural Network, can be used to estimate predictive uncertainty by collecting the predictions of multiple independently trained networks and measuring their disagreement. In this paper, we introduce a method for uncertainty estimation that considers an independent categorical distribution for each layer of the network, yielding many more possible samples with overlapping layers than regular Deep Ensembles. We further introduce an optimized inference procedure that reuses common layer outputs, achieving up to a 19x speed-up and reducing memory usage quadratically. We also show that the method can be further improved by ranking samples, resulting in models that require less memory and time to run while achieving higher uncertainty quality than Deep Ensembles.
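The core idea of the abstract can be illustrated with a minimal sketch: keep K independent weight sets per layer and build a sampled network by picking one set per layer, so L layers give K**L distinct networks instead of the K networks of a regular Deep Ensemble. All class and function names below are illustrative assumptions, not the paper's actual implementation, and the output-reuse optimization mentioned in the abstract is only noted in a comment, not implemented.

```python
import numpy as np

rng = np.random.default_rng(0)

class LayerEnsemble:
    """Illustrative sketch (not the paper's code): each layer stores K
    independent (W, b) pairs; a sampled network chooses one pair per layer,
    so L layers with K choices each yield K**L possible networks."""

    def __init__(self, sizes, K):
        self.K = K
        # K candidate (weights, bias) pairs for every layer
        self.layers = [
            [(rng.standard_normal((n_out, n_in)) * 0.1, np.zeros(n_out))
             for _ in range(K)]
            for n_in, n_out in zip(sizes[:-1], sizes[1:])
        ]

    def forward(self, x, choice):
        """Run one sampled network; choice[l] selects layer l's member."""
        for layer, c in zip(self.layers, choice):
            W, b = layer[c]
            x = np.maximum(W @ x + b, 0.0)  # ReLU
        return x

    def predictive_samples(self, x, n_samples):
        """Draw network samples. Samples whose choice vectors share a common
        prefix also share the same intermediate activations, which is what
        enables the paper's reuse-based speed-up (not implemented here)."""
        choices = rng.integers(0, self.K, size=(n_samples, len(self.layers)))
        return np.stack([self.forward(x, c) for c in choices])

le = LayerEnsemble(sizes=[4, 8, 3], K=3)
preds = le.predictive_samples(rng.standard_normal(4), n_samples=10)
uncertainty = preds.std(axis=0)  # disagreement across sampled networks
```

Disagreement across the sampled networks (here, the per-output standard deviation) serves as the uncertainty estimate, in the same spirit as vote disagreement in Deep Ensembles.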

Original language: English
Title of host publication: 2023 IEEE 33rd International Workshop on Machine Learning for Signal Processing (MLSP)
Editors: Danilo Comminiello, Michele Scarpiniti
Number of pages: 6
Publisher: IEEE
Publication date: 2023
ISBN (Print): 979-8-3503-2412-9
ISBN (Electronic): 979-8-3503-2411-2
DOIs
Publication status: Published - 2023
Series: IEEE Workshop on Machine Learning for Signal Processing
ISSN: 1551-2541

Keywords

  • Bayesian neural networks
  • Deep Ensembles
  • uncertainty estimation
  • uncertainty quality
