diff --git a/Machine Learning/3.2 GBDT/README.md b/Machine Learning/3.2 GBDT/README.md
index f903b4d8c2a6b5fbe4d313ae98b7167565959c06..2493910eb447740f56953905b91f95c13c7e7461 100644
--- a/Machine Learning/3.2 GBDT/README.md
+++ b/Machine Learning/3.2 GBDT/README.md
@@ -24,11 +24,11 @@ The principle of GBDT is simple: the predicted value is the sum of the outputs of all the weak learners
 
 In a regression task, GBDT produces a prediction for every sample at each iteration; the loss function here is the squared-error loss,
 
-$$l(y_i,y^i)=\frac{1}{2}(y_i-y^i)^2$$
+![](https://julyedu-img.oss-cn-beijing.aliyuncs.com/quesbase64155214962034944638.gif)
 
 and the negative gradient is then computed as
 
-$$-[\frac{\partial l(y_i,y^i)}{\partial y^i}]=(y_i-y^i)$$
+![](https://julyedu-img.oss-cn-beijing.aliyuncs.com/quesbase64155214962416670973.gif)
 
 So, when the squared-error loss is used, the value fitted in each round is (true value − current model's prediction), i.e. the residual. The variable here is ![](https://julyedu-img.oss-cn-beijing.aliyuncs.com/quesbase64155214963633267938.gif), the "current model's prediction", and the negative gradient is taken with respect to it.
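The residual identity in the patched text above can be checked numerically. Below is a minimal illustrative sketch (not code from the README itself; the array values are made up) showing that for the squared-error loss l = ½(y − ŷ)², the negative gradient with respect to the current prediction ŷ equals the residual y − ŷ, and that repeatedly adding a shrunken residual drives the loss down, which is exactly the fitting step each GBDT round performs:

```python
import numpy as np

# Toy targets and an initial constant model (the usual GBDT initialization
# for squared-error loss is the mean of the targets).
y = np.array([3.0, -0.5, 2.0, 7.0])
y_hat = np.full_like(y, y.mean())

def loss(y, y_hat):
    # Mean of the per-sample squared-error loss 1/2 (y - y_hat)^2
    return 0.5 * np.mean((y - y_hat) ** 2)

# Negative gradient of the loss w.r.t. the current prediction:
#   d/dy_hat [ 1/2 (y - y_hat)^2 ] = -(y - y_hat),
# so the NEGATIVE gradient is (y - y_hat) -- the residual.
neg_grad = y - y_hat
residual = y - y_hat
assert np.allclose(neg_grad, residual)

# Each boosting round fits this residual; here a perfect "weak learner"
# is simulated by adding the residual scaled by a learning rate of 0.1.
initial_loss = loss(y, y_hat)
for _ in range(50):
    y_hat = y_hat + 0.1 * (y - y_hat)
assert loss(y, y_hat) < initial_loss
```

In a real GBDT the residual is approximated by a regression tree rather than used directly, but the update rule (current prediction plus a shrunken fit of the negative gradient) is the same.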