From c0ef73e6e3663750a28303f194bd7e826aa5163b Mon Sep 17 00:00:00 2001
From: wizardforcel <562826179@qq.com>
Date: Fri, 14 Aug 2020 18:51:05 +0800
Subject: [PATCH] 2020-08-14 18:51:05

---
 docs/tf-1x-dl-cookbook/02.md | 2 +-
 docs/tf-1x-dl-cookbook/12.md | 8 ++++----
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/tf-1x-dl-cookbook/02.md b/docs/tf-1x-dl-cookbook/02.md
index a5ab7e4e..5b88afe3 100644
--- a/docs/tf-1x-dl-cookbook/02.md
+++ b/docs/tf-1x-dl-cookbook/02.md
@@ -216,7 +216,7 @@ optimizer = tf.train.AdadeltaOptimizer(learning_rate=0.8, rho=0.95).minimize(los
 optimizer = tf.train.RMSpropOptimizer(learning_rate=0.01, decay=0.8, momentum=0.1).minimize(loss)
 ```
 
-There are some fine differences between Adadelta and RMSprop. To find out more about them, you can refer to [http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf) and [https://arxiv.org/pdf/1212.5701.pdf](https://arxiv.org/pdf/1212.5701.pdf).
+There are some subtle differences between Adadelta and RMSprop. To find out more about them, you can refer to [here](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf) and [here](https://arxiv.org/pdf/1212.5701.pdf).
 
 9. Another popular optimizer supported by TensorFlow is the Adam optimizer. This method computes individual adaptive learning rates for the different coefficients from estimates of the first and second moments of the gradients:
 
diff --git a/docs/tf-1x-dl-cookbook/12.md b/docs/tf-1x-dl-cookbook/12.md
index b8e2ee50..70e19f79 100644
--- a/docs/tf-1x-dl-cookbook/12.md
+++ b/docs/tf-1x-dl-cookbook/12.md
@@ -26,7 +26,7 @@
 
 ![](img/b2700755-61ba-44f4-82aa-a60e2dd7f16c.png)
 
-An example of distributed gradient descent with a parameter server as taken from [https://research.google.com/archive/large_deep_networks_nips2012.html](https://research.google.com/archive/large_deep_networks_nips2012.html)
+[An example of distributed gradient descent with a parameter server](https://research.google.com/archive/large_deep_networks_nips2012.html)
 
 Another document you should read is the white paper [TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems](http://download.tensorflow.org/paper/whitepaper2015.pdf) (Martín Abadi et al., November 2015).
 
@@ -34,19 +34,19 @@
 
 | ![](img/b5cffc2b-9734-4179-abfb-71abe9fa4fa5.png) | ![](img/4eaf774e-bf1e-423f-bd04-bbbe56dfe2d4.png) |
 
-An example of TensorFlow graph as taken from [http://download.tensorflow.org/paper/whitepaper2015.pdf](http://download.tensorflow.org/paper/whitepaper2015.pdf)
+[An example of a TensorFlow graph](http://download.tensorflow.org/paper/whitepaper2015.pdf)
 
 The graph can be partitioned across multiple nodes by performing computation locally and transparently adding remote communication nodes to the graph where needed. This is well explained in the following figure, again taken from the paper mentioned above:
 
 ![](img/bceaef6d-d560-4a58-9f7e-01797bfa3350.png)
 
-An example of distributed TensorFlow graph computation as taken from [http://download.tensorflow.org/paper/whitepaper2015.pdf](http://download.tensorflow.org/paper/whitepaper2015.pdf)
+[An example of distributed TensorFlow graph computation](http://download.tensorflow.org/paper/whitepaper2015.pdf)
 
 Gradient descent and all of the major optimizer algorithms can be computed either in a centralized way (left side of the following figure) or in a distributed way (right side). The latter involves a master process that talks to multiple workers provisioning GPUs and CPUs:
 
 ![](img/96de5e4c-3af3-4ba3-9afa-6d1ad513b03c.png)
 
-An example of single machine and distributed system structure as taken from An example of distributed TensorFlow graph computation as taken from [http://download.tensorflow.org/paper/whitepaper2015.pdf](http://download.tensorflow.org/paper/whitepaper2015.pdf)
+[An example of single-machine and distributed system structure](http://download.tensorflow.org/paper/whitepaper2015.pdf)
 
 Distributed computation can be either synchronous (all workers update the gradients on shards of the data at the same time) or asynchronous (the updates do not happen at the same time). The latter usually allows higher scalability, and larger graph computations still work well in terms of converging to an optimal solution. Again, these figures come from the TensorFlow white paper, and I strongly encourage interested readers to read it if they want to find out more:
 
-- 
GitLab
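
A note on the optimizer recipe touched by the `02.md` hunk above: step 9 describes Adam as computing individual adaptive learning rates from estimates of the first and second moments of the gradients. The sketch below shows that in the same `optimizer = tf.train.XxxOptimizer(...).minimize(loss)` style the recipe uses for Adadelta and RMSprop; the placeholder shapes, the toy regression loss, and the spelled-out hyperparameters are illustrative assumptions, not taken from the book.

```
import tensorflow as tf

# Hypothetical toy loss, standing in for the loss defined earlier in the recipe.
X = tf.placeholder(tf.float32, shape=[None, 3], name="X")
Y = tf.placeholder(tf.float32, shape=[None, 1], name="Y")
W = tf.Variable(tf.random_normal([3, 1]), name="W")
b = tf.Variable(tf.zeros([1]), name="b")
loss = tf.reduce_mean(tf.square(Y - (tf.matmul(X, W) + b)))

# Adam keeps running estimates of the first moment (mean) and second moment
# (uncentered variance) of the gradients and uses them to adapt the learning
# rate of each coefficient individually.
optimizer = tf.train.AdamOptimizer(learning_rate=0.01,
                                   beta1=0.9,
                                   beta2=0.999,
                                   epsilon=1e-8).minimize(loss)
```

The values `beta1=0.9`, `beta2=0.999`, and `epsilon=1e-8` are TensorFlow's own defaults for `AdamOptimizer`; they are written out here only to make the two moment estimates visible.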
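On the distributed-TensorFlow material touched by the `12.md` hunks: the text describes a parameter-server architecture in which variables live on parameter servers while workers apply gradient updates synchronously or asynchronously. The following is a minimal sketch of that pattern in TF 1.x, assuming a hypothetical toy cluster on `localhost` ports and a made-up regression loss; it only illustrates the pattern and is not the chapter's own recipe.

```
import tensorflow as tf

# Hypothetical toy cluster: one parameter server and two workers, with
# localhost ports chosen purely for illustration.
cluster = tf.train.ClusterSpec({
    "ps": ["localhost:2222"],
    "worker": ["localhost:2223", "localhost:2224"]
})

# Each process starts a server for its own task; here we pretend to be worker 0.
job_name, task_index = "worker", 0
server = tf.train.Server(cluster, job_name=job_name, task_index=task_index)

if job_name == "ps":
    # Parameter servers host the shared variables and simply wait.
    server.join()
else:
    # replica_device_setter places variables on the parameter server and the
    # computation on the local worker, mirroring the figures referenced above.
    with tf.device(tf.train.replica_device_setter(
            worker_device="/job:worker/task:%d" % task_index,
            cluster=cluster)):
        x = tf.placeholder(tf.float32, shape=[None, 1])
        y = tf.placeholder(tf.float32, shape=[None, 1])
        w = tf.Variable(tf.zeros([1, 1]))
        b = tf.Variable(tf.zeros([1]))
        loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))
        global_step = tf.train.get_or_create_global_step()

        # By default each worker applies its gradients asynchronously; wrapping
        # the optimizer in tf.train.SyncReplicasOptimizer would make the
        # updates synchronous across workers instead.
        train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
            loss, global_step=global_step)

    # Each worker would then open a session against its own server, e.g.
    # tf.train.MonitoredTrainingSession(master=server.target, ...), and run
    # train_op in a loop; this requires the "ps" process to be running as well.
```

Asynchronous updates trade some gradient staleness for better scalability, which is the behaviour the white paper cited in the chapter reports for large graph computations.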