diff --git a/docs/dl-nlp/caption-generation-inject-merge-architectures-encoder-decoder-model.md b/docs/dl-nlp/caption-generation-inject-merge-architectures-encoder-decoder-model.md
index 06d79016d2cd7ac314a3974299ab364833bcdf8c..2fd19499a507d6e75524a616af491b4e6b2933ea 100644
--- a/docs/dl-nlp/caption-generation-inject-merge-architectures-encoder-decoder-model.md
+++ b/docs/dl-nlp/caption-generation-inject-merge-architectures-encoder-decoder-model.md
@@ -40,9 +40,9 @@
 
 有关编解码器循环神经网络架构的更多信息,请参阅帖子:
 
-* [编解码器长短期存储器网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
+* [编解码器长短期记忆网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
 
-通常,卷积神经网络用于编码图像,并且循环神经网络(例如长短期存储器网络)用于编码到目前为止生成的文本序列,和/或生成序列中的下一个单词。 。
+通常,卷积神经网络用于编码图像,并且循环神经网络(例如长短期记忆网络)用于编码到目前为止生成的文本序列,和/或生成序列中的下一个单词。
 
 对于字幕生成问题,有很多方法可以实现这种架构。
 
@@ -134,7 +134,7 @@
 如果您希望深入了解,本节将提供有关该主题的更多资源。
 
 * [Marc Tanti 的博客](https://geekyisawesome.blogspot.com.au/)
-* [编解码器长短期存储器网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
+* [编解码器长短期记忆网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
 * [将图像放在图像标题生成器](https://arxiv.org/abs/1703.09137)中的位置,2017。
 * [循环神经网络(RNN)在图像标题生成器中的作用是什么?](https://arxiv.org/abs/1708.02043) ,2017。
 
diff --git a/docs/dl-nlp/configure-encoder-decoder-model-neural-machine-translation.md b/docs/dl-nlp/configure-encoder-decoder-model-neural-machine-translation.md
index f60757b662729d5013d62f39a0205736a683e168..24cf4fd048efabf84755627fa67786beaee86999 100644
--- a/docs/dl-nlp/configure-encoder-decoder-model-neural-machine-translation.md
+++ b/docs/dl-nlp/configure-encoder-decoder-model-neural-machine-translation.md
@@ -37,7 +37,7 @@
 
 有关编解码器架构和注意机制的更多背景信息,请参阅帖子:
 
-* [编解码器长短期存储器网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
+* [编解码器长短期记忆网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
 * [长期短期记忆循环神经网络](https://machinelearningmastery.com/attention-long-short-term-memory-recurrent-neural-networks/)的注意事项
 
 ## 基线模型
diff --git a/docs/dl-nlp/develop-neural-machine-translation-system-keras.md b/docs/dl-nlp/develop-neural-machine-translation-system-keras.md
index 8721bcaf6297e9ac7aae728593a9c0ea1a42f16d..dfefb9d3bdde77c1eda24de9eab3d9f1cfa3d860 100644
--- a/docs/dl-nlp/develop-neural-machine-translation-system-keras.md
+++ b/docs/dl-nlp/develop-neural-machine-translation-system-keras.md
@@ -927,7 +927,7 @@ BLEU-4: 0.076238
 
 * [制表符分隔的双语句子对](http://www.manythings.org/anki/)
 * [德语 - 英语 deu-eng.zip](http://www.manythings.org/anki/deu-eng.zip)
-* [编解码器长短期存储器网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
+* [编解码器长短期记忆网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
 
 ## 摘要
 
diff --git a/docs/dl-nlp/encoder-decoder-models-text-summarization-keras.md b/docs/dl-nlp/encoder-decoder-models-text-summarization-keras.md
index 565d8e13072c4930f14482800cd8066b6a12de6b..0a018474dac22d1b5f8a0c07d23390f519e23f91 100644
--- a/docs/dl-nlp/encoder-decoder-models-text-summarization-keras.md
+++ b/docs/dl-nlp/encoder-decoder-models-text-summarization-keras.md
@@ -44,7 +44,7 @@
 
 有关编解码器架构的更多信息,请参阅帖子:
 
-* [编解码器长短期存储器网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
+* [编解码器长短期记忆网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
 
 编码器和解码器子模型都是联合训练的,意思是同时进行。
 
@@ -300,7 +300,7 @@ model.compile(loss='categorical_crossentropy', optimizer='adam')
 
 ### 有关
 
-* [编解码器长短期存储器网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
+* [编解码器长短期记忆网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
 * [长期短期记忆循环神经网络](https://machinelearningmastery.com/attention-long-short-term-memory-recurrent-neural-networks/)的注意事项
 
 ## 摘要
diff --git a/docs/dl-nlp/encoder-decoder-recurrent-neural-network-models-neural-machine-translation.md b/docs/dl-nlp/encoder-decoder-recurrent-neural-network-models-neural-machine-translation.md
index 20369fbb192d22728394b3b3dce84657ae7353d3..2515878e99af1ab3a7b0819b7c15fba3d50e6b97 100644
--- a/docs/dl-nlp/encoder-decoder-recurrent-neural-network-models-neural-machine-translation.md
+++ b/docs/dl-nlp/encoder-decoder-recurrent-neural-network-models-neural-machine-translation.md
@@ -40,7 +40,7 @@
 
 有关架构的更多信息,请参阅帖子:
 
-* [编解码器长短期存储器网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
+* [编解码器长短期记忆网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
 
 ## Sutskever NMT 模型
 
diff --git a/docs/dl-nlp/introduction-neural-machine-translation.md b/docs/dl-nlp/introduction-neural-machine-translation.md
index acbfa0ae256e3a7887787b2a45810e2b6c8fd135..a73339bf2b82609b8d8f71c9e98c481c6673795b 100644
--- a/docs/dl-nlp/introduction-neural-machine-translation.md
+++ b/docs/dl-nlp/introduction-neural-machine-translation.md
@@ -113,7 +113,7 @@
 
 有关编解码器循环神经网络架构的更多信息,请参阅帖子:
 
-* [编解码器长短期存储器网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
+* [编解码器长短期记忆网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
 
 ### 带注意的编码器解码器
 
diff --git a/docs/dl-ts/how-to-get-started-with-deep-learning-for-time-series-forecasting-7-day-mini-course.md b/docs/dl-ts/how-to-get-started-with-deep-learning-for-time-series-forecasting-7-day-mini-course.md
index 1ea45aca8e3ce3a630469e83857d5cc10a3a904a..cb6ae5068ea900369d35f7f82172b90632cf52a1 100644
--- a/docs/dl-ts/how-to-get-started-with-deep-learning-for-time-series-forecasting-7-day-mini-course.md
+++ b/docs/dl-ts/how-to-get-started-with-deep-learning-for-time-series-forecasting-7-day-mini-course.md
@@ -452,7 +452,7 @@ print(yhat)
 
 ### 更多信息
 
-* [编解码器长短期存储器网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
+* [编解码器长短期记忆网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
 * [多步时间序列预测的 4 种策略](https://machinelearningmastery.com/multi-step-time-series-forecasting/)
 * [Python 中长期短期记忆网络的多步时间序列预测](https://machinelearningmastery.com/multi-step-time-series-forecasting-long-short-term-memory-networks-python/)
 
diff --git a/docs/dl-ts/how-to-load-and-explore-household-electricity-usage-data.md b/docs/dl-ts/how-to-load-and-explore-household-electricity-usage-data.md
index bfb40968fdd5b715562a831180f9c21ee5b30bc9..0fca98bdfbc4b417944c9ee51f729e62a6d14972 100644
--- a/docs/dl-ts/how-to-load-and-explore-household-electricity-usage-data.md
+++ b/docs/dl-ts/how-to-load-and-explore-household-electricity-usage-data.md
@@ -616,7 +616,7 @@ pyplot.show()
 
 通常,神经网络在自回归类型问题上未被证明非常有效。
 
-然而,诸如卷积神经网络的技术能够从原始数据(包括一维信号数据)自动学习复杂特征。并且诸如长短期存储器网络之类的循环神经网络能够直接学习输入数据的多个并行序列。
+然而,诸如卷积神经网络的技术能够从原始数据(包括一维信号数据)自动学习复杂特征。并且诸如长短期记忆网络之类的循环神经网络能够直接学习输入数据的多个并行序列。
 
 此外,这些方法的组合,例如 CNN LSTM 和 ConvLSTM,已经证明在时间序列分类任务上是有效的。
 
diff --git a/docs/lstm/SUMMARY.md b/docs/lstm/SUMMARY.md
index dc7885886ec5c74982abbdcbc26fe2f6e87c341c..2e2e5c3acd0cde5653dbd70ca0e0833d9438f918 100644
--- a/docs/lstm/SUMMARY.md
+++ b/docs/lstm/SUMMARY.md
@@ -8,7 +8,7 @@
-+ [如何开发Keras序列到序列预测的编解码器模型](develop-encoder-decoder-model-sequence-sequence-prediction-keras.md)
++ [如何在 Keras 中开发用于序列到序列预测的编解码器模型](develop-encoder-decoder-model-sequence-sequence-prediction-keras.md)
 + [如何诊断LSTM模型的过拟合和欠拟合](diagnose-overfitting-underfitting-lstm-models.md)
-+ [如何开发一种编解码器模型,注重Keras中的序列到序列预测](encoder-decoder-attention-sequence-to-sequence-prediction-keras.md)
-+ [编解码器长短期存储器网络](encoder-decoder-long-short-term-memory-networks.md)
++ [如何在Keras中开发带有注意力的编解码器模型](encoder-decoder-attention-sequence-to-sequence-prediction-keras.md)
++ [编解码器长短期记忆网络](encoder-decoder-long-short-term-memory-networks.md)
 + [神经网络中爆炸梯度的温和介绍](exploding-gradients-in-neural-networks.md)
 + [对时间反向传播的温和介绍](gentle-introduction-backpropagation-time.md)
 + [生成长短期记忆网络的温和介绍](gentle-introduction-generative-long-short-term-memory-networks.md)
diff --git a/docs/lstm/crash-course-recurrent-neural-networks-deep-learning.md b/docs/lstm/crash-course-recurrent-neural-networks-deep-learning.md
index 506a9a778c1363ba51a0df4884f0f0997ce96d02..f672b6798d44447acb3e458a4cbb8048313464f9 100644
--- a/docs/lstm/crash-course-recurrent-neural-networks-deep-learning.md
+++ b/docs/lstm/crash-course-recurrent-neural-networks-deep-learning.md
@@ -77,7 +77,7 @@
 
 通过使用整流器传递函数,这种问题在深层多层感知器网络中得到了缓解,甚至更加奇特但现在不那么流行的使用无监督预层训练的方法。
 
-在循环神经网络架构中,使用称为长短期存储器网络的新型架构可以缓解这个问题,该架构允许训练深度复现网络。
+在循环神经网络架构中,使用称为长短期记忆网络的新型架构可以缓解这个问题,该架构允许训练深度复现网络。
 
 ## 长期短期记忆网络
 
diff --git a/docs/lstm/develop-encoder-decoder-model-sequence-sequence-prediction-keras.md b/docs/lstm/develop-encoder-decoder-model-sequence-sequence-prediction-keras.md
index a0988cc8bd856809f52bfafd6833a29c2f62ce9c..0dd5ca3b4c46b0b0fe30d098d07cbb2c4a99b562 100644
--- a/docs/lstm/develop-encoder-decoder-model-sequence-sequence-prediction-keras.md
+++ b/docs/lstm/develop-encoder-decoder-model-sequence-sequence-prediction-keras.md
@@ -1,4 +1,4 @@
-# 如何开发Keras序列到序列预测的编解码器模型
+# 如何在 Keras 中开发用于序列到序列预测的编解码器模型
 
 > 原文: [https://machinelearningmastery.com/develop-encoder-decoder-model-sequence-sequence-prediction-keras/](https://machinelearningmastery.com/develop-encoder-decoder-model-sequence-sequence-prediction-keras/)
 
diff --git a/docs/lstm/encoder-decoder-attention-sequence-to-sequence-prediction-keras.md b/docs/lstm/encoder-decoder-attention-sequence-to-sequence-prediction-keras.md
index fd6c14b7527e95195ecba667f3970980d11c1536..9d89b89773922aeadd38914eb7cb5a2d7b109563 100644
--- a/docs/lstm/encoder-decoder-attention-sequence-to-sequence-prediction-keras.md
+++ b/docs/lstm/encoder-decoder-attention-sequence-to-sequence-prediction-keras.md
@@ -1,4 +1,4 @@
-# 如何开发一种编解码器模型,注重Keras中的序列到序列预测
+# 如何在Keras中开发带有注意力的编解码器模型
 
 > 原文: [https://machinelearningmastery.com/encoder-decoder-attention-sequence-to-sequence-prediction-keras/](https://machinelearningmastery.com/encoder-decoder-attention-sequence-to-sequence-prediction-keras/)
 
@@ -258,7 +258,7 @@ n_timesteps_out = 2
 
 有关如何在Keras中定义编解码器架构的更多详细信息,请参阅帖子:
 
-* [编解码器长短期存储器网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
+* [编解码器长短期记忆网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
 
 我们将使用相同数量的单位配置编码器和解码器,在本例中为150.我们将使用梯度下降的有效Adam实现并优化分类交叉熵损失函数,因为该问题在技术上是一个多类别分类问题。
 
@@ -1045,7 +1045,7 @@ Mean Accuracy: 95.70%
 
 * [长期短期记忆循环神经网络](https://machinelearningmastery.com/attention-long-short-term-memory-recurrent-neural-networks/)的注意事项
 * [编解码器循环神经网络中的注意事项如何工作](https://machinelearningmastery.com/how-does-attention-work-in-encoder-decoder-recurrent-neural-networks/)
-* [编解码器长短期存储器网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
+* [编解码器长短期记忆网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
 * [如何评估深度学习模型的技巧](https://machinelearningmastery.com/evaluate-skill-deep-learning-models/)
 * [如何在Keras中注意循环神经网络](https://medium.com/datalogue/attention-in-keras-1892773a4f22),2017。
 * [keras-attention GitHub Project](https://github.com/datalogue/keras-attention)
diff --git a/docs/lstm/encoder-decoder-long-short-term-memory-networks.md b/docs/lstm/encoder-decoder-long-short-term-memory-networks.md
index b80d858e298e22006f6fa2826005303020f4bb19..ca84620d79c7199f76361678dcde394168eef4f5 100644
--- a/docs/lstm/encoder-decoder-long-short-term-memory-networks.md
+++ b/docs/lstm/encoder-decoder-long-short-term-memory-networks.md
@@ -1,4 +1,4 @@
-# 编解码器长短期存储器网络
+# 编解码器长短期记忆网络
 
 > 原文: [https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
 
diff --git a/docs/lstm/global-attention-for-encoder-decoder-recurrent-neural-networks.md b/docs/lstm/global-attention-for-encoder-decoder-recurrent-neural-networks.md
index c800bf534a7b3ac05df0b9dc9bdc9a80b541fbd7..ed0d8f6f5c9aa99a600d9b6ef0831aeabc765a09 100644
--- a/docs/lstm/global-attention-for-encoder-decoder-recurrent-neural-networks.md
+++ b/docs/lstm/global-attention-for-encoder-decoder-recurrent-neural-networks.md
@@ -158,7 +158,7 @@
 
 * [用神经网络进行序列学习的序列](https://arxiv.org/abs/1409.3215),2014。
 * [使用RNN编解码器进行统计机器翻译的学习短语表示](https://arxiv.org/abs/1406.1078),2014。
-* [编解码器长短期存储器网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
+* [编解码器长短期记忆网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
 
 ### 注意
 
diff --git a/docs/lstm/implementation-patterns-encoder-decoder-rnn-architecture-attention.md b/docs/lstm/implementation-patterns-encoder-decoder-rnn-architecture-attention.md
index 9bdadc28be8546043b4e66a9527e5bd890a7f321..d103a32650aa3da269d76167bb41f5e9d8011641 100644
--- a/docs/lstm/implementation-patterns-encoder-decoder-rnn-architecture-attention.md
+++ b/docs/lstm/implementation-patterns-encoder-decoder-rnn-architecture-attention.md
@@ -36,7 +36,7 @@
 
 有关编解码器架构的更多信息,请参阅帖子:
 
-* [编解码器长短期存储器网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
+* [编解码器长短期记忆网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
 
 ## 直接编解码器实现
 
@@ -175,7 +175,7 @@ Below is a depiction of this implementation.
 
 ### 帖子
 
-* [编解码器长短期存储器网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
+* [编解码器长短期记忆网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
 * [长期短期记忆循环神经网络](https://machinelearningmastery.com/attention-long-short-term-memory-recurrent-neural-networks/)的注意事项
 * [编解码器循环神经网络中的注意事项如何工作](https://machinelearningmastery.com/how-does-attention-work-in-encoder-decoder-recurrent-neural-networks/)
 
diff --git a/docs/lstm/lstm-autoencoders.md b/docs/lstm/lstm-autoencoders.md
index 6c8b9dc571026244e6228d6c376ff100c520c835..39d2642fa22152028573f65fa61f05fdf4bc7685 100644
--- a/docs/lstm/lstm-autoencoders.md
+++ b/docs/lstm/lstm-autoencoders.md
@@ -80,7 +80,7 @@ LSTM网络可以组织成称为编解码器LSTM的架构,该架构允许该模
 
 您可以在此处了解有关编解码器架构的更多信息
 
-* [编解码器长短期存储器网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
+* [编解码器长短期记忆网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
 
 ## 什么是LSTM自动编码器?
 
@@ -508,7 +508,7 @@ print(yhat)
 如果您希望深入了解,本节将提供有关该主题的更多资源。
 
 * [用序列做出预测](https://machinelearningmastery.com/sequence-prediction/)
-* [编解码器长短期存储器网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
+* [编解码器长短期记忆网络](https://machinelearningmastery.com/encoder-decoder-long-short-term-memory-networks/)
 * [自动编码器,维基百科](https://en.wikipedia.org/wiki/Autoencoder)
 * [使用LSTM进行视频表示的无监督学习](https://arxiv.org/abs/1502.04681),ArXiv 2015。
 * [使用LSTM进行视频表示的无监督学习](http://proceedings.mlr.press/v37/srivastava15.pdf),PMLR,PDF,2015。
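A note on the architecture behind these renames: every hunk above points at the same Encoder-Decoder LSTM post, and the patched documents quote fragments of its Keras setup (the 150-unit layers and the `model.compile(...)` call that appear as hunk context). The sketch below is a minimal illustration of that architecture, assuming TensorFlow's bundled Keras; the sequence lengths and vocabulary size are invented placeholder values, not taken from any of the patched files.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, RepeatVector, TimeDistributed

# Illustrative sizes only -- assumptions, not values from the patched docs.
n_steps_in, n_steps_out = 5, 2  # input/output sequence lengths
n_features = 50                 # size of the one-hot vocabulary

model = Sequential()
# Encoder: reads the input sequence and emits one fixed-length state vector.
model.add(LSTM(150, input_shape=(n_steps_in, n_features)))
# Bridge: repeats that vector once per expected output time step.
model.add(RepeatVector(n_steps_out))
# Decoder: unrolls the encoding into the output sequence.
model.add(LSTM(150, return_sequences=True))
# One softmax over the vocabulary at each output step.
model.add(TimeDistributed(Dense(n_features, activation='softmax')))
# The same compile call quoted as context in the text-summarization hunk.
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.summary()
```

The `RepeatVector` bridge is the fixed-length bottleneck that the attention variants named in the retitled posts are designed to relax.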