Unverified commit ac4c1233, authored by Nicky Chan, committed by GitHub

Merge pull request #555 from nickyfantasy/machine_translation_cn

Add Chinese translation for Book chapter 8 machine translation
This diff is collapsed.
@@ -51,7 +51,7 @@ After training and with a beam-search size of 3, the generated translations are
 ## Overview of the Model
-This section will introduce Gated Recurrent Unit (GRU), Bi-directional Recurrent Neural Network, the Encoder-Decoder framework used in NMT, attention mechanism, as well as the beam search algorithm.
+This section will introduce Bi-directional Recurrent Neural Network, the Encoder-Decoder framework used in NMT, as well as the beam search algorithm.
 ### Bi-directional Recurrent Neural Network
@@ -196,7 +196,6 @@ Then we implement encoder as follows:
 ```python
 def encoder(is_sparse):
-    # encoder
     src_word_id = pd.data(
         name="src_word_id", shape=[1], dtype='int64', lod_level=1)
     src_embedding = pd.embedding(
@@ -216,7 +215,6 @@ Implement the decoder for training as follows:
 ```python
 def train_decoder(context, is_sparse):
-    # decoder
     trg_language_word = pd.data(
         name="target_language_word", shape=[1], dtype='int64', lod_level=1)
     trg_embedding = pd.embedding(
......
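Both hunks above are cut off by the diff view just after the embedding lookup. For orientation, here is a minimal sketch of how an encoder of this shape can be completed, assuming `pd` aliases `paddle.fluid.layers` as is conventional in this chapter, and treating `dict_size`, `word_dim`, and `hidden_dim` as hypothetical hyperparameters. Everything past the lines shown in the diff is an assumption, not part of this commit.

```python
import paddle.fluid.layers as pd

dict_size = 30000   # assumed source vocabulary size
word_dim = 16       # assumed embedding width
hidden_dim = 32     # assumed recurrent hidden width


def encoder(is_sparse):
    # Source word ids arrive as a variable-length (LoD) int64 sequence.
    src_word_id = pd.data(
        name="src_word_id", shape=[1], dtype='int64', lod_level=1)
    # Look up a trainable embedding for each source word.
    src_embedding = pd.embedding(
        input=src_word_id,
        size=[dict_size, word_dim],
        dtype='float32',
        is_sparse=is_sparse)
    # Project the embeddings, run an LSTM over the sequence, and keep the
    # last hidden state as a fixed-length encoding of the source sentence.
    fc1 = pd.fc(input=src_embedding, size=hidden_dim * 4, act='tanh')
    lstm_hidden, _ = pd.dynamic_lstm(input=fc1, size=hidden_dim * 4)
    encoder_out = pd.sequence_last_step(input=lstm_hidden)
    return encoder_out
```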
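The `train_decoder` hunk likewise stops at the target-side embedding. Below is a sketch of one common continuation: a single-layer decoder built with `DynamicRNN`, conditioned on the encoder output `context`. The names `decoder_size` and `target_dict_dim` are hypothetical, and `pd` and `word_dim` are as in the previous sketch.

```python
decoder_size = 32        # assumed decoder hidden width
target_dict_dim = 30000  # assumed target vocabulary size


def train_decoder(context, is_sparse):
    # Target word ids, again as a variable-length (LoD) int64 sequence.
    trg_language_word = pd.data(
        name="target_language_word", shape=[1], dtype='int64', lod_level=1)
    trg_embedding = pd.embedding(
        input=trg_language_word,
        size=[target_dict_dim, word_dim],
        dtype='float32',
        is_sparse=is_sparse)

    rnn = pd.DynamicRNN()
    with rnn.block():
        # One decoding step: combine the current target word with the
        # previous hidden state, which is initialized from the encoder output.
        current_word = rnn.step_input(trg_embedding)
        pre_state = rnn.memory(init=context)
        current_state = pd.fc(
            input=[current_word, pre_state], size=decoder_size, act='tanh')
        # Predict a distribution over the target vocabulary at every step.
        current_score = pd.fc(
            input=current_state, size=target_dict_dim, act='softmax')
        rnn.update_memory(pre_state, current_state)
        rnn.output(current_score)
    return rnn()
```

At training time this decoder consumes the ground-truth target words (teacher forcing); the beam-search size of 3 mentioned in the hunk headers applies only when generating translations at inference time.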
@@ -93,7 +93,7 @@ After training and with a beam-search size of 3, the generated translations are
 ## Overview of the Model
-This section will introduce Gated Recurrent Unit (GRU), Bi-directional Recurrent Neural Network, the Encoder-Decoder framework used in NMT, attention mechanism, as well as the beam search algorithm.
+This section will introduce Bi-directional Recurrent Neural Network, the Encoder-Decoder framework used in NMT, as well as the beam search algorithm.
 ### Bi-directional Recurrent Neural Network
......