WLD-Reg: A Data-Dependent Within-Layer Diversity Regularizer

Firas Laakom, Jenni Raitoharju, Alexandros Iosifidis, Moncef Gabbouj

Research output: Contribution to book/anthology/report/proceeding › Article in proceedings › Research › peer-review


Neural networks are composed of multiple layers arranged in a hierarchical structure jointly trained with a gradient-based optimization, where the errors are back-propagated from the last layer back to the first one. At each optimization step, neurons at a given layer receive feedback from neurons belonging to higher layers of the hierarchy. In this paper, we propose to complement this traditional 'between-layer' feedback with additional 'within-layer' feedback to encourage the diversity of the activations within the same layer. To this end, we measure the pairwise similarity between the outputs of the neurons and use it to model the layer's overall diversity. We present an extensive empirical study confirming that the proposed approach enhances the performance of several state-of-the-art neural network models in multiple tasks. The code is publicly available at https://github.com/firasl/AAAI-23WLD-Reg.
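To illustrate the idea behind the abstract, the following is a minimal sketch (not the paper's exact formulation; see the linked repository for the authors' implementation) of a within-layer diversity penalty: pairwise cosine similarity between neuron outputs over a batch, averaged over all pairs of distinct neurons, so that redundant neurons are penalized and orthogonal ones are not.

```python
import math

def within_layer_diversity_penalty(activations):
    """Illustrative diversity penalty for one layer.

    activations: list of per-neuron output vectors, each of length
    batch_size (the neuron's activations across a mini-batch).
    Returns the mean squared cosine similarity over all ordered pairs
    of distinct neurons: 1.0 when all neurons are identical, 0.0 when
    their outputs are mutually orthogonal.
    """
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u)) + 1e-12
        nv = math.sqrt(sum(b * b for b in v)) + 1e-12
        return dot / (nu * nv)

    n = len(activations)
    # High similarity between two neurons means they carry redundant
    # information; summing squared similarities measures redundancy.
    total = sum(
        cosine(activations[i], activations[j]) ** 2
        for i in range(n) for j in range(n) if i != j
    )
    return total / (n * (n - 1))
```

In training, such a term would be added to the task loss with a weighting coefficient, giving each layer's neurons a 'within-layer' gradient signal alongside the usual back-propagated 'between-layer' one.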

Original language: English
Title of host publication: AAAI-23 Technical Tracks 7
Editors: Brian Williams, Yiling Chen, Jennifer Neville
Number of pages: 9
Publisher: AAAI Press
Publication date: 27 Jun 2023
Article number: 190493
ISBN (Electronic): 9781577358800
Publication status: Published - 27 Jun 2023
Event: 37th AAAI Conference on Artificial Intelligence, AAAI 2023 - Washington, United States
Duration: 7 Feb 2023 - 14 Feb 2023


Conference: 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Country/Territory: United States
Sponsor: Association for the Advancement of Artificial Intelligence
Series: Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023


