Generalization and Overfitting
Nobelstraße 19
Stuttgart
Germany
This event is available both online and in-person
Organisers:
Topic areas
Talks at this conference
Details
A large part of the recent success of highly parametrized ML models is due to their apparent ability to generalize to unseen data. This ability seems to be in tension with mathematical results from traditional statistics (e.g., the bias-variance trade-off) and statistical learning theory (e.g., PAC theorems), which rely heavily on either strong assumptions about the underlying probability distribution or restrictions on the hypothesis class. The predominant engineering epistemology takes this as a failure of ML theory and suggests that contemporary ML models generalize well even beyond the classical overfitting regime.
This workshop aims to shed light on the tension between generalization and overfitting. For more information and a schedule, please see our website: https://philo.hlrs.de/?p=415.
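As a rough illustration of the classical picture alluded to above (not part of the workshop materials), the following sketch fits polynomials of increasing degree to noisy data: training error keeps falling while test error eventually rises, which is the overfitting regime that highly parametrized models appear to escape in practice. The helper `sample` and the chosen degrees are arbitrary assumptions for the demonstration.

```python
# Minimal sketch of the classical bias-variance / overfitting picture.
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    # Noisy observations of a smooth ground-truth function.
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + 0.3 * rng.standard_normal(n)
    return x, y

x_train, y_train = sample(30)
x_test, y_test = sample(200)

for degree in (1, 3, 9, 15):
    coeffs = np.polyfit(x_train, y_train, degree)  # least-squares polynomial fit
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

In the classical account, the highest-degree fits drive training error toward zero while test error grows; part of what the workshop discusses is why modern overparametrized models often do not behave this way.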
Registration: No