Research output: Contribution to book/anthology/report/proceeding › Article in proceedings › Research › peer-review
Boosting algorithms iteratively produce linear combinations of more and more base hypotheses, and it has been observed experimentally that the generalization error keeps improving even after the training error reaches zero. One popular explanation attributes this to improvements in margins. A common goal in a long line of research is to maximize the smallest margin using as few base hypotheses as possible, culminating in the AdaBoostV algorithm of Rätsch and Warmuth (2005). AdaBoostV was later conjectured to yield an optimal trade-off between the number of hypotheses trained and the minimal margin over all training points (Nie et al., 2013). Our main contribution is a new algorithm refuting this conjecture. Furthermore, we prove a lower bound which implies that our new algorithm is optimal.
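To pin down the quantity the abstract trades off against the number of hypotheses, the sketch below computes the minimal margin of a voting classifier. It is a minimal illustration assuming ±1 base hypotheses; the data, weights, and function name are hypothetical stand-ins, not the paper's algorithm or experimental setup.

```python
import numpy as np

def minimal_margin(H, alpha, y):
    """Smallest normalized margin of a voting classifier.

    H     : (n_samples, n_hypotheses) matrix of base predictions in {-1, +1}
    alpha : non-negative hypothesis weights produced by boosting
    y     : (n_samples,) labels in {-1, +1}
    """
    # Normalized vote: f(x_i) = sum_t alpha_t h_t(x_i) / sum_t alpha_t
    f = H @ alpha / np.sum(alpha)
    # Margin of example i is y_i * f(x_i); the minimal margin is the worst case.
    return np.min(y * f)

# Toy example: 4 training points, 3 base hypotheses (illustrative only).
H = np.array([[+1, +1, -1],
              [+1, -1, +1],
              [-1, +1, +1],
              [+1, +1, +1]])
y = np.array([+1, +1, +1, +1])
alpha = np.array([0.4, 0.3, 0.3])

print(minimal_margin(H, alpha, y))  # prints 0.2; margin maximization pushes this up
```

Normalizing by the total weight keeps margins in [-1, 1], so the minimal margin measures how confidently the worst-classified training point is voted on, which is the quantity the algorithms above seek to maximize with few hypotheses.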
| Original language | English |
| --- | --- |
| Title of host publication | 36th International Conference on Machine Learning, ICML 2019 |
| Editors | Kamalika Chaudhuri, Ruslan Salakhutdinov |
| Number of pages | 10 |
| Volume | 97 |
| Publisher | International Machine Learning Society (IMLS) |
| Publication year | 2019 |
| Pages | 4392-4401 |
| ISBN (Electronic) | 9781510886988 |
| Publication status | Published - 2019 |
| Event | 36th International Conference on Machine Learning, Long Beach, United States. Duration: 9 Jun 2019 → 15 Jun 2019 |
| Conference | 36th International Conference on Machine Learning |
| --- | --- |
| Country | United States |
| City | Long Beach |
| Period | 09/06/2019 → 15/06/2019 |
| Series | Proceedings of Machine Learning Research |
| --- | --- |
| Volume | 97 |