From c6307085ba9a66f7857d71558c3e5d68ee3fa846 Mon Sep 17 00:00:00 2001
From: Aston Zhang
Date: Thu, 4 Oct 2018 20:04:39 +0800
Subject: [PATCH] 2pass opt intro

---
 chapter_optimization/index.md              |  5 +-
 chapter_optimization/optimization-intro.md | 55 +++++++++++++++++++---
 2 files changed, 50 insertions(+), 10 deletions(-)

diff --git a/chapter_optimization/index.md b/chapter_optimization/index.md
index 60a78f2..6be9f4f 100644
--- a/chapter_optimization/index.md
+++ b/chapter_optimization/index.md
@@ -1,12 +1,11 @@
 # Optimization Algorithms
 
-If you have read this book in order up to this point, you have most likely already used optimization algorithms to train deep learning models. Specifically, when training a model, we use an optimization algorithm to iterate the model parameters so as to minimize the model's loss function. When the iteration terminates, the training of the model terminates with it. The model parameters at that point are the parameters the model has learned through training.
+If you have read this book in order up to this point, then you have already used optimization algorithms to train deep learning models. Specifically, when training a model, we use an optimization algorithm to iterate the model parameters so as to reduce the value of the model's loss function. When the iteration terminates, the training of the model terminates with it. The model parameters at that point are the parameters the model has learned through training.
 
-Optimization algorithms matter greatly for deep learning. On the one hand, training a complex deep learning model can take hours, days, or even weeks, and the performance of the optimization algorithm directly affects the efficiency of model training. On the other hand, understanding the principles of the various optimization algorithms and the meaning of each of their parameters will help us tune them in a more targeted way and thereby make deep learning models perform better.
+Optimization algorithms matter greatly for deep learning. On the one hand, training a complex deep learning model can take hours, days, or even weeks, and the performance of the optimization algorithm directly affects the model's training efficiency. On the other hand, understanding the principles of the various optimization algorithms and the meaning of their hyperparameters will help us tune them in a more targeted way and thereby make deep learning models perform better.
 
 This chapter introduces the commonly used optimization algorithms in deep learning in detail.
 
-
 ```eval_rst
 .. 
toctree::

diff --git a/chapter_optimization/optimization-intro.md b/chapter_optimization/optimization-intro.md
index ca8d063..11c9200 100644
--- a/chapter_optimization/optimization-intro.md
+++ b/chapter_optimization/optimization-intro.md
@@ -1,8 +1,10 @@
 # Optimization and Deep Learning
 
-This section discusses the relationship between optimization and deep learning, as well as the challenges optimization faces in deep learning. In a deep learning problem, we usually define a loss function in advance. With a loss function in hand, we can use an optimization algorithm to try to minimize it. In optimization, such a loss function is usually called the objective function of the optimization problem. By convention, optimization algorithms only consider minimizing the objective function. In fact, any maximization problem can easily be converted into a minimization problem: we simply flip the sign in front of the objective function.
+This section discusses the relationship between optimization and deep learning, as well as the challenges optimization faces in deep learning. In a deep learning problem, we usually define a loss function in advance. With a loss function in hand, we can use an optimization algorithm to try to minimize it. In optimization, such a loss function is usually called the objective function of the optimization problem. By convention, optimization algorithms only consider minimizing the objective function. In fact, any maximization problem can easily be converted into a minimization problem: we simply take the negative of the objective function as the new objective function.
 
-Although optimization provides deep learning with a way to minimize the loss function, in essence the goals of the two are different.
+## The Relationship between Optimization and Deep Learning
+
+Although optimization provides deep learning with a way to minimize the loss function, in essence the goals of optimization and deep learning are different.
 In the ["Model Selection, Underfitting and Overfitting"](../chapter_deep-learning-basics/underfit-overfit.md) section, we distinguished between training error and generalization error.
 Since the objective function of an optimization algorithm is usually a loss function based on the training data set, the goal of optimization is to reduce the training error.
 The goal of deep learning, however, is to reduce the generalization error.
@@ -13,7 +15,7 @@
 ## Challenges of Optimization in Deep Learning
 
-The vast majority of objective functions in deep learning are complex. Many optimization problems therefore have no analytical solution, and we must instead use optimization algorithms based on numerical methods to find approximate solutions. Such algorithms generally find an approximate solution by iteratively updating its value. The optimization algorithms we discuss are all of this numerical kind.
+We distinguished between analytical and numerical solutions to optimization problems in the ["Linear Regression"](../chapter_deep-learning-basics/linear-regression.md) section. The vast majority of objective functions in deep learning are complex. Many optimization problems therefore have no analytical solution, and we must instead use optimization algorithms based on numerical methods to find approximate solutions, that is, numerical solutions. The optimization algorithms we discuss are all of this numerical kind. To obtain a numerical solution that minimizes the objective function, we use an optimization algorithm to iterate the model parameters a finite number of times and reduce the value of the loss function as much as possible.
 
 Optimization faces many challenges in deep learning. Below we describe two of them: local minima and saddle points. To better illustrate the problems, let us first import the packages or modules required for the experiments in this section.
 
@@ -52,6 +54,19 @@ gb.plt.xlabel('x')
 gb.plt.ylabel('f(x)');
 ```
 
+```{.json .output n=2}
+(SVG plot output omitted)
" + }, + "metadata": {}, + "output_type": "display_data" + } +] +``` + 深度学习模型的目标函数可能有若干局部最优值。当一个优化问题的数值解在局部最优解附近时,由于目标函数有关解的梯度接近或变成零,最终迭代求得的数值解可能只令目标函数局部最小化而非全局最小化。 ### 鞍点 @@ -71,6 +86,19 @@ gb.plt.xlabel('x') gb.plt.ylabel('f(x)'); ``` +```{.json .output n=3} +[ + { + "data": { + "image/svg+xml": "\n\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n", + "text/plain": "
" + }, + "metadata": {}, + "output_type": "display_data" + } +] +``` + 再举个定义在二维空间的函数的例子,例如 $$f(x, y) = x^2 - y^2.$$ @@ -92,9 +120,22 @@ gb.plt.xlabel('x') gb.plt.ylabel('y'); ``` +```{.json .output n=4} +[ + { + "data": { + "image/svg+xml": "\n\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n", + "text/plain": "
" + }, + "metadata": {}, + "output_type": "display_data" + } +] +``` + 在上图的鞍点位置,目标函数在$x$轴方向上是局部最小值,而在$y$轴方向上是局部最大值。 -假设一个函数的输入为$k$维向量,输出为标量,那么它的黑塞矩阵(Hessian matrix)有$k$个特征值。需要注意的是,该函数在梯度为零的位置上可能是局部最小值、局部最大值或者鞍点: +假设一个函数的输入为$k$维向量,输出为标量,那么它的黑塞矩阵(Hessian matrix)有$k$个特征值。该函数在梯度为零的位置上可能是局部最小值、局部最大值或者鞍点: * 当函数的黑塞矩阵在梯度为零的位置上的特征值全为正时,该函数得到局部最小值。 * 当函数的黑塞矩阵在梯度为零的位置上的特征值全为负时,该函数得到局部最大值。 @@ -102,7 +143,7 @@ gb.plt.ylabel('y'); 随机矩阵理论告诉我们,对于一个大的高斯随机矩阵来说,任一特征值是正或者是负的概率都是0.5 [1]。那么,以上第一种情况的概率为 $0.5^k$。由于深度学习模型参数通常都是高维的($k$很大),目标函数的鞍点通常比局部最小值更常见。 -深度学习中,虽然找到目标函数的全局最优解很难,但这并非必要。我们将在接下来的章节中逐一介绍深度学习中常用的优化算法,它们在很多实际问题中都训练出了十分有效的深度学习模型。 +深度学习中,虽然找到目标函数的全局最优解很难,但这并非必要。我们将在本章接下来的小节中逐一介绍深度学习中常用的优化算法,它们在很多实际问题中都训练出了十分有效的深度学习模型。 ## 小结 @@ -113,14 +154,14 @@ gb.plt.ylabel('y'); ## 练习 -* 你还能想到哪些深度学习中的优化问题的挑战? +* 对于深度学习中的优化问题,你还能想到哪些其他的挑战? ## 扫码直达[讨论区](https://discuss.gluon.ai/t/topic/1876) - ![](../img/qr_optimization-intro.svg) + ## 参考文献 [1] Wigner, E. P. (1958). On the distribution of the roots of certain symmetric matrices. Annals of Mathematics, 325-327. -- GitLab