
Optimal Minimal Margin Maximization with Boosting

Research output: Contribution to book/anthology/report/proceeding › Article in proceedings › Research › peer-review

Boosting algorithms iteratively produce linear combinations of more and more base hypotheses, and it has been observed experimentally that the generalization error keeps improving even after achieving zero training error. One popular explanation attributes this to improvements in margins. A common goal in a long line of research is to maximize the smallest margin using as few base hypotheses as possible, culminating with the AdaBoostV algorithm by Rätsch and Warmuth (2005). The AdaBoostV algorithm was later conjectured to yield an optimal trade-off between the number of hypotheses trained and the minimal margin over all training points (Nie et al., 2013). Our main contribution is a new algorithm refuting this conjecture. Furthermore, we prove a lower bound which implies that our new algorithm is optimal.
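For readers unfamiliar with the margin quantity discussed in the abstract, the following minimal sketch illustrates standard AdaBoost with decision stumps and the normalized margins of the resulting voting classifier. It is purely illustrative: it is not the paper's new algorithm nor AdaBoostV, and the toy data and helper names are assumptions made for the example.

```python
# Illustrative sketch: plain AdaBoost with decision stumps and margin computation.
# NOT the paper's algorithm or AdaBoostV; data and helpers are hypothetical.
import numpy as np

def train_stump(X, y, w):
    """Pick the threshold stump with the lowest weighted error."""
    best = (np.inf, 0, 0.0, 1)  # (error, feature, threshold, polarity)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[0]:
                    best = (err, j, thr, pol)
    return best

def adaboost(X, y, rounds=30):
    n = X.shape[0]
    w = np.full(n, 1.0 / n)                      # uniform sample weights
    stumps, alphas = [], []
    for _ in range(rounds):
        err, j, thr, pol = train_stump(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)    # weight of this base hypothesis
        pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)           # upweight misclassified points
        w /= w.sum()
        stumps.append((j, thr, pol))
        alphas.append(alpha)
    return stumps, np.array(alphas)

def margins(X, y, stumps, alphas):
    """Normalized margins y * f(x) / ||alpha||_1 over all training points."""
    f = np.zeros(X.shape[0])
    for (j, thr, pol), a in zip(stumps, alphas):
        f += a * np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
    return y * f / np.abs(alphas).sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
    stumps, alphas = adaboost(X, y)
    print(f"minimal margin after 30 rounds: {margins(X, y, stumps, alphas).min():.3f}")
```

The "minimal margin" printed at the end is the quantity the line of research described in the abstract seeks to maximize while keeping the number of base hypotheses small.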

Original language: English
Title of host publication: 36th International Conference on Machine Learning, ICML 2019
Editors: Kamalika Chaudhuri, Ruslan Salakhutdinov
Number of pages: 10
Volume: 97
Publisher: International Machine Learning Society (IMLS)
Publication year: 2019
Pages: 4392-4401
ISBN (Electronic): 9781510886988
Publication status: Published - 2019
Event: 36th International Conference on Machine Learning - Long Beach, United States
Duration: 9 Jun 2019 – 15 Jun 2019

Conference

Conference: 36th International Conference on Machine Learning
Country: United States
City: Long Beach
Period: 09/06/2019 – 15/06/2019
Series: Proceedings of Machine Learning Research
Volume: 97


ID: 170314054