Learning Distinct Features Helps, Provably

Firas Laakom*, Jenni Raitoharju, Alexandros Iosifidis, Moncef Gabbouj

*Corresponding author for this work

Research output: Contribution to book/anthology/report/proceeding › Article in proceedings › Research › peer-review

3 Citations (Scopus)

Abstract

We study the diversity of the features learned by a two-layer neural network trained with the least squares loss. We measure the diversity by the average L2-distance between the hidden-layer features and theoretically investigate how learning non-redundant, distinct features affects the performance of the network. To do so, we derive novel Rademacher-complexity-based generalization bounds for such networks that depend on feature diversity. Our analysis proves that more distinct features at the network's units within the hidden layer lead to better generalization. We also show how to extend our results to deeper networks and different losses.
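For intuition, the diversity measure the abstract refers to can be sketched numerically: collect each hidden unit's responses over a sample and average the pairwise L2-distances between units. The sketch below is a minimal illustration under assumptions; the ReLU activation, the function names, and the unnormalized distance are illustrative choices, not the paper's exact definitions.

    import numpy as np

    def hidden_features(X, W, b):
        # Hidden-layer activations of a two-layer network; shape (n_samples, n_units).
        # ReLU is an illustrative choice; the paper's setting may use another activation.
        return np.maximum(0.0, X @ W + b)

    def average_pairwise_l2(H):
        # Average L2-distance between all pairs of hidden units' response
        # vectors over the sample; larger values = more distinct features.
        n_units = H.shape[1]
        assert n_units >= 2, "diversity needs at least two hidden units"
        total, pairs = 0.0, 0
        for i in range(n_units):
            for j in range(i + 1, n_units):
                total += np.linalg.norm(H[:, i] - H[:, j])
                pairs += 1
        return total / pairs

    # Toy usage with random data and weights (hypothetical shapes).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(128, 10))   # 128 samples, 10 input features
    W = rng.normal(size=(10, 32))    # 32 hidden units
    b = np.zeros(32)
    print(average_pairwise_l2(hidden_features(X, W, b)))

Under this reading, the paper's claim is that, other things being equal, networks whose hidden units score higher on such a diversity measure admit tighter Rademacher-complexity generalization bounds.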

Original language: English
Title of host publication: Machine Learning and Knowledge Discovery in Databases: Research Track: European Conference, ECML PKDD 2023, Turin, Italy, September 18–22, 2023, Proceedings, Part II
Editors: Danai Koutra, Claudia Plant, Manuel Gomez Rodriguez, Elena Baralis, Francesco Bonchi
Number of pages: 17
Place of publication: Cham
Publisher: Springer
Publication date: 2023
Pages: 206-222
ISBN (Print): 978-3-031-43414-3
ISBN (Electronic): 978-3-031-43415-0
Publication status: Published - 2023
Series: Lecture Notes in Computer Science
Volume: 14170
ISSN: 0302-9743

Keywords

  • Feature Diversity
  • Generalization Theory
  • Neural Networks
