Question | Answers |
Overfitting | > Explains all training points very well or even exactly > Tends to be very complicated and also models the noise > Predictions for unseen data points are poor (large test error) > Low approximation error, high estimation error (see the polynomial sketch below) |
Underfitting | > The estimated function is too simplistic > Estimated functions are stable with respect to noise > Large approximation error, low estimation error (see the polynomial sketch below) |
Supervised learning | > Input-output pairs (Xi, Yi) are given as training data > Goal: "learn" a function f: X -> Y with f(Xi) ≈ Yi > A teacher tells us what the true outcome should be on the given samples > Tasks: Classification, Regression (see the classification sketch below) |
Unsupervised learning | > Given: Input values Xi, but no output values > The learner should discover the "structure" of the inputs > Tasks: Clustering, Outlier detection (see the clustering sketch below) |
Semi-supervised learning | > Given: Many input values X1, ..., Xu (unlabeled points) and some input-output pairs (Xi, Yi) (labeled points) > Goal (same as supervised learning): learn the function f with f(Xi) ≈ Yi > Unsupervised part: exploit the extra knowledge gained from the unlabeled points (see the self-training sketch below) |
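A minimal polynomial sketch of the overfitting and underfitting cards, assuming NumPy; the sine-plus-noise data, the train/test split, and the degrees 1/3/14 are illustrative choices, not part of the cards. The degree-1 fit is too simplistic and underfits, while the degree-14 fit can pass through all 15 training points, also models the noise, and produces a much larger test error.

```python
# Under- vs. overfitting sketch: fit polynomials of increasing degree to noisy data.
# (Assumes NumPy; the sine target and the degrees 1/3/14 are illustrative choices.)
import numpy as np

rng = np.random.default_rng(0)

def noisy_sample(n=15):
    x = np.sort(rng.uniform(0.0, 1.0, n))
    y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, n)   # true signal plus noise
    return x, y

x_train, y_train = noisy_sample()
x_test, y_test = noisy_sample()

for degree in (1, 3, 14):
    coeffs = np.polyfit(x_train, y_train, degree)           # least-squares polynomial fit
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # degree 1 underfits (both errors stay large); degree 14 drives the training
    # error toward zero while the test error blows up (it also fits the noise).
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```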
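A minimal supervised-learning (classification) sketch, assuming scikit-learn; the iris data and the k-nearest-neighbours classifier are arbitrary stand-ins for any (Xi, Yi) pairs and any learner. The labels y_train play the role of the teacher, and the learned f is judged on unseen test points.

```python
# Supervised-learning sketch: fit f on labeled pairs (X_i, Y_i), predict unseen X.
# (Assumes scikit-learn; iris and k-NN are illustrative choices.)
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)                   # inputs X_i with teacher labels Y_i
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)                           # learn f from the (X_i, Y_i) pairs
print("test accuracy:", clf.score(X_test, y_test))  # evaluate on unseen points
```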
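A minimal unsupervised-learning (clustering) sketch, again assuming scikit-learn: the same inputs, but no output values are used; k-means (one clustering method among many) only looks for structure in the Xi.

```python
# Unsupervised sketch: only inputs X_i are used, no outputs Y_i.
# (Assumes scikit-learn; k-means and the iris inputs are illustrative choices.)
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, _ = load_iris(return_X_y=True)                 # labels exist here but are NOT used
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))      # discovered structure, not true classes
```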
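A minimal semi-supervised self-training sketch, assuming scikit-learn: only a few points keep their labels, the rest are marked -1 (scikit-learn's convention for "unlabeled"), and a self-training wrapper around an ordinary supervised classifier pseudo-labels confident unlabeled points in order to exploit them. The dataset and base classifier are again illustrative choices.

```python
# Semi-supervised sketch: a few labeled points, many unlabeled ones (marked -1).
# (Assumes scikit-learn; iris, logistic regression and self-training are
#  illustrative choices, not prescribed by the card.)
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = load_iris(return_X_y=True)

y_partial = np.full_like(y, -1)                    # start with everything unlabeled
for c in np.unique(y):                             # keep only 5 labels per class
    y_partial[np.where(y == c)[0][:5]] = c

base = LogisticRegression(max_iter=1000)
semi = SelfTrainingClassifier(base).fit(X, y_partial)   # pseudo-labels confident points
print("accuracy on all points:", semi.score(X, y))
```

LabelPropagation and LabelSpreading in the same sklearn.semi_supervised module are graph-based alternatives that spread the few known labels over a similarity graph instead of retraining a base classifier.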