ECINN: Efficient Counterfactuals from Invertible Neural Networks

Frederik Hvilshøj, Alexandros Iosifidis, Ira Assent

Research output: Contribution to book/anthology/report/proceeding › Article in proceedings › Research


Counterfactual examples identify how inputs can be altered to change the predicted class of a classifier, thus opening up the black-box nature of, e.g., deep neural networks. We propose a method, ECINN, that utilizes the generative capabilities of invertible neural networks for image classification to generate counterfactual examples efficiently. In contrast to competing methods that sometimes need a thousand evaluations or more of the classifier, ECINN has a closed-form expression and generates a counterfactual in the time of only two evaluations. Arguably, the main challenge of generating counterfactual examples is to alter only those input features that affect the predicted outcome, i.e., class-dependent features. Our experiments demonstrate how ECINN alters class-dependent image regions to change both the perceptual and the predicted class, producing more realistic-looking counterfactuals three orders of magnitude faster than competing methods.
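The abstract's core idea — encode once, apply a closed-form shift in latent space, decode once — can be sketched with a toy stand-in. Everything below is an illustrative assumption, not ECINN's exact update rule: the invertible "network" is a fixed affine bijection, the two classes are synthetic Gaussians, and the counterfactual shift is simply the difference of class-conditional latent means.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Hypothetical invertible "network": a well-conditioned affine bijection
# z = A x + b, standing in for a trained invertible neural network.
A = np.eye(d) + 0.1 * rng.standard_normal((d, d))
b = rng.standard_normal(d)
A_inv = np.linalg.inv(A)

def forward(x):
    """Encode x to latent z (the first of the two evaluations)."""
    return x @ A.T + b

def inverse(z):
    """Decode latent z back to input space (the second evaluation)."""
    return (z - b) @ A_inv.T

# Two synthetic classes with different means in input space.
m0, m1 = np.array([2.0, 0, 0, 0]), np.array([-2.0, 0, 0, 0])
x0 = rng.standard_normal((200, d)) + m0
x1 = rng.standard_normal((200, d)) + m1

# Class-conditional means in latent space.
mu0, mu1 = forward(x0).mean(axis=0), forward(x1).mean(axis=0)

def counterfactual(x, alpha=1.0):
    """Closed-form shift: move the latent code along the line between
    the class means, then invert. No iterative optimization needed."""
    return inverse(forward(x) + alpha * (mu1 - mu0))

def nearest_class(x):
    """Toy nearest-mean classifier in input space (0 or 1)."""
    d0 = np.linalg.norm(x - m0, axis=-1)
    d1 = np.linalg.norm(x - m1, axis=-1)
    return (d1 < d0).astype(int)

# Most class-0 samples are pushed across the decision boundary.
flip_rate = (nearest_class(counterfactual(x0)) == 1).mean()
```

For the affine toy map, the latent shift reduces to adding the input-space mean difference, so almost every class-0 sample flips; with a real invertible network the same two-evaluation recipe produces nonlinear, class-dependent edits.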
Original language: English
Title of host publication: 32nd British Machine Vision Conference 2021, BMVC 2021, Virtual
Publication date: 22 Nov 2021
Publication status: Published - 22 Nov 2021
Event: British Machine Vision Conference, Online, 22-25 November 2021
Duration: 22 Nov 2021 - 25 Nov 2021
Conference number: 32




