Overview
Syllabus
Awesome song and introduction
The initial prediction
Building an XGBoost Tree for regression
Calculating Similarity Scores
Calculating Gain to evaluate different thresholds
Pruning an XGBoost Tree
Building an XGBoost Tree with regularization
Calculating output values for an XGBoost Tree
Making predictions with XGBoost
Summary of concepts and main ideas
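To make the listed steps concrete, here is a minimal sketch (not XGBoost's actual implementation) of the similarity-score, gain, and output-value formulas the regression lessons cover. The residuals, lam, gamma, and eta names, and the example numbers, are illustrative assumptions, not values taken from this page.

```python
# Hand-rolled versions of the formulas used when building an XGBoost tree
# for regression with squared-error loss (illustrative sketch only).

def similarity_score(residuals, lam=0.0):
    # Similarity = (sum of residuals)^2 / (number of residuals + lambda)
    return sum(residuals) ** 2 / (len(residuals) + lam)

def gain(left, right, lam=0.0):
    # Gain of a split = Left similarity + Right similarity - Root similarity
    root = left + right
    return (similarity_score(left, lam)
            + similarity_score(right, lam)
            - similarity_score(root, lam))

def output_value(residuals, lam=0.0):
    # Leaf output = sum of residuals / (number of residuals + lambda)
    return sum(residuals) / (len(residuals) + lam)

# Made-up residuals (observations minus the initial prediction of 0.5),
# split at some candidate threshold:
left, right = [-10.5], [6.5, 7.5, -7.5]
print(gain(left, right, lam=0.0))   # keep the split only if gain - gamma > 0
print(output_value(left))           # new prediction = 0.5 + eta * leaf output
```

Increasing lambda shrinks similarity scores and leaf outputs, and a larger gamma prunes more branches, which is how the regularization lessons tie into the tree-building steps above.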
I say "66", but I meant to say "62.48". However, either way, the conclusion is the same.
Note: The original XGBoost documentation uses the epsilon symbol to refer to the learning rate, but in the actual implementation this is controlled via the "eta" parameter. So, I guess to be consistent with the original documentation, I made the same mistake!
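For context, here is a minimal sketch (assuming the Python xgboost package and made-up data) of where that learning rate shows up in practice: the native API calls it "eta", while the scikit-learn style wrapper exposes the same setting as "learning_rate".

```python
import numpy as np
import xgboost as xgb

# Made-up regression data for illustration (e.g. dosage vs. effectiveness).
X = np.array([[10.0], [20.0], [25.0], [35.0]])
y = np.array([-10.5, 6.5, 7.5, -7.5])

# Native API: the learning rate is the "eta" parameter (default 0.3).
params = {"objective": "reg:squarederror", "eta": 0.3, "max_depth": 2}
booster = xgb.train(params, xgb.DMatrix(X, label=y), num_boost_round=10)

# scikit-learn style API: the same setting is called "learning_rate".
model = xgb.XGBRegressor(learning_rate=0.3, n_estimators=10, max_depth=2)
model.fit(X, y)
```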
Taught by
StatQuest with Josh Starmer