Commit 91c4d221 authored by dangqingqing

Adjust indent for jupyter

Parent 0fe8d890
@@ -225,6 +225,8 @@ We trained a language model on English Wikipedia to obtain a word vector lookup table
Get the dictionaries and print their sizes:
```python
import math
import numpy as np
import paddle.v2 as paddle
import paddle.v2.dataset.conll05 as conll05
@@ -233,81 +235,81 @@ word_dict_len = len(word_dict)
label_dict_len = len(label_dict)
pred_len = len(verb_dict)

print word_dict_len
print label_dict_len
print pred_len
```
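The hunk above skips over the dictionary loading itself. For orientation, here is a sketch of that step, under the assumption that it uses the standard paddle.v2.dataset.conll05 call (it is not part of the lines shown in this diff):

```python
# conll05.get_dict() returns the word, predicate (verb), and label
# dictionaries whose sizes are printed above.
word_dict, verb_dict, label_dict = conll05.get_dict()
```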
## Model configuration

- 1. Define input data dimensions and model hyperparameters.

```python
mark_dict_len = 2    # value range of the region mark: a mark is either 0 or 1, so the range is 2
word_dim = 32        # word vector dimension
mark_dim = 5         # region mark vector dimension
hidden_dim = 512     # the dimension of the LSTM hidden layer vector is 128 (512/4)
depth = 8            # depth of the stacked LSTM

# There are 9 features per sample, so we define 9 data layers.
# The type of each layer is integer_value_sequence.
def d_type(value_range):
    return paddle.data_type.integer_value_sequence(value_range)

# word sequence
word = paddle.layer.data(name='word_data', type=d_type(word_dict_len))
# predicate
predicate = paddle.layer.data(name='verb_data', type=d_type(pred_len))

# 5 features for predicate context
ctx_n2 = paddle.layer.data(name='ctx_n2_data', type=d_type(word_dict_len))
ctx_n1 = paddle.layer.data(name='ctx_n1_data', type=d_type(word_dict_len))
ctx_0 = paddle.layer.data(name='ctx_0_data', type=d_type(word_dict_len))
ctx_p1 = paddle.layer.data(name='ctx_p1_data', type=d_type(word_dict_len))
ctx_p2 = paddle.layer.data(name='ctx_p2_data', type=d_type(word_dict_len))

# region marker sequence
mark = paddle.layer.data(name='mark_data', type=d_type(mark_dict_len))

# label sequence
target = paddle.layer.data(name='target', type=d_type(label_dict_len))
```
Special note: hidden_dim = 512 actually specifies that the LSTM hidden vector has 128 dimensions (512/4). For details, please refer to the PaddlePaddle official documentation: [lstmemory](http://www.paddlepaddle.org/doc/ui/api/trainer_config_helpers/layers.html#lstmemory).
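To make the 512-versus-128 relationship concrete: `lstmemory`'s size spans the input, forget, and output gates plus the cell input, so the effective hidden width is a quarter of it. A small sanity check (the variable name here is ours, not from the original):

```python
# lstmemory packs 4 components (3 gates + cell input) into `size`,
# so the effective hidden vector is size / 4.
lstm_hidden_width = hidden_dim // 4
assert lstm_hidden_width == 128   # 512 // 4
```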
- 2. The word sequence, predicate, predicate context, and region mark sequence are transformed into embedding vector sequences.
```python
# Since the word vector lookup table is pre-trained, we won't update it here.
# is_static=True prevents the lookup table from being updated during training.
emb_para = paddle.attr.Param(name='emb', initial_std=0., is_static=True)
# hyperparameter configurations
default_std = 1 / math.sqrt(hidden_dim) / 3.0
std_default = paddle.attr.Param(initial_std=default_std)
std_0 = paddle.attr.Param(initial_std=0.)

predicate_embedding = paddle.layer.embedding(
    size=word_dim,
    input=predicate,
    param_attr=paddle.attr.Param(
        name='vemb', initial_std=default_std))
mark_embedding = paddle.layer.embedding(
    size=mark_dim, input=mark, param_attr=std_0)

word_input = [word, ctx_n2, ctx_n1, ctx_0, ctx_p1, ctx_p2]
emb_layers = [
    paddle.layer.embedding(
        size=word_dim, input=x, param_attr=emb_para) for x in word_input
]
emb_layers.append(predicate_embedding)
emb_layers.append(mark_embedding)
```
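As a quick sanity check (ours, not in the original), `emb_layers` should now hold eight embedding sequences: six word-dimension embeddings for the word and its five context features, plus the predicate and mark embeddings:

```python
# 6 word/context embeddings + predicate embedding + mark embedding
assert len(emb_layers) == 8
```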
- 3. 8 LSTM units will be trained in "forward / backward" order.
```python
hidden_0 = paddle.layer.mixed(
    size=hidden_dim,
    bias_attr=std_default,
    input=[
@@ -315,12 +317,12 @@ print len(pred_len)
        input=emb, param_attr=std_default) for emb in emb_layers
    ])

mix_hidden_lr = 1e-3
lstm_para_attr = paddle.attr.Param(initial_std=0.0, learning_rate=1.0)
hidden_para_attr = paddle.attr.Param(
    initial_std=default_std, learning_rate=mix_hidden_lr)

lstm_0 = paddle.layer.lstmemory(
    input=hidden_0,
    act=paddle.activation.Relu(),
    gate_act=paddle.activation.Sigmoid(),
@@ -328,10 +330,10 @@ print len(pred_len)
    bias_attr=std_0,
    param_attr=lstm_para_attr)

# stack L-LSTM and R-LSTM with direct edges
input_tmp = [hidden_0, lstm_0]

for i in range(1, depth):
    mix_hidden = paddle.layer.mixed(
        size=hidden_dim,
        bias_attr=std_default,
@@ -352,9 +354,9 @@ print len(pred_len)
        param_attr=lstm_para_attr)

    input_tmp = [mix_hidden, lstm]
```
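The loop body is largely hidden by the hunks above. Conceptually, each depth level mixes the previous pair of outputs and runs an LSTM in the opposite direction. A hypothetical sketch of that alternation, assuming the elided `lstmemory` call toggles its `reverse` flag per level (which is how `lstmemory` expresses scan direction):

```python
# Hypothetical illustration, not the verbatim elided code: even levels
# scan left-to-right, odd levels right-to-left, which is what
# "forward / backward" order refers to.
for i in range(1, depth):
    is_reverse = (i % 2) == 1   # levels 1, 3, 5, 7 read the sequence backwards
    # ... build mix_hidden from input_tmp, then:
    # lstm = paddle.layer.lstmemory(..., reverse=is_reverse, ...)
```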
- 4. We will concatenate the output of the top LSTM unit with its input, and project the result into a hidden layer. A fully connected layer on top of that produces the final vector representation.
```python
feature_out = paddle.layer.mixed(
@@ -368,10 +370,10 @@ print len(pred_len)
    ], )
```
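Most of this block is elided by the hunk. Given the description in step 4, a plausible reconstruction (an assumption on our part, not the verbatim diff content) projects both elements of `input_tmp` — the top mixed layer and the top LSTM — into a `label_dict_len`-sized vector:

```python
# Hedged reconstruction of the elided body: full-matrix projections of
# the top hidden layer and the top LSTM, mixed into one score per label.
feature_out = paddle.layer.mixed(
    size=label_dict_len,
    bias_attr=std_default,
    input=[
        paddle.layer.full_matrix_projection(
            input=input_tmp[0], param_attr=hidden_para_attr),
        paddle.layer.full_matrix_projection(
            input=input_tmp[1], param_attr=lstm_para_attr)
    ], )
```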
- 5. We use a CRF layer as the cost function; the parameter of the CRF cost will be named `crfw`.
```python
crf_cost = paddle.layer.crf(
    size=label_dict_len,
    input=feature_out,
    label=target,
@@ -379,18 +381,18 @@ print len(pred_len)
        name='crfw',
        initial_std=default_std,
        learning_rate=mix_hidden_lr))
```
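The training wiring itself lies outside this diff. For orientation, in the paddle.v2 API the cost layer is typically what parameter creation and the trainer are built from; a usage sketch under that assumption:

```python
# Sketch of the usual downstream step (not shown in this commit):
# create the model parameters from the cost layer's topology.
parameters = paddle.parameters.create(crf_cost)
```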
- 6. The CRF decoding layer is used for evaluation and inference. It shares parameters with the CRF layer; sharing is specified by giving both layers the same parameter name.
```python
crf_dec = paddle.layer.crf_decoding(
    name='crf_dec_l',
    size=label_dict_len,
    input=feature_out,
    label=target,
    param_attr=paddle.attr.Param(name='crfw'))
```
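Sharing is purely name-based: any layer whose `param_attr` names `crfw` reuses the weights learned through `crf_cost`. For pure inference, the same layer is typically instantiated without `label`, in which case it decodes the best-scoring tag sequence; a hedged usage sketch:

```python
# Without `label`, crf_decoding performs Viterbi-style decoding with the
# shared 'crfw' weights (usage sketch, not part of this diff).
predict = paddle.layer.crf_decoding(
    size=label_dict_len,
    input=feature_out,
    param_attr=paddle.attr.Param(name='crfw'))
```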
## Train model
......
@@ -206,7 +206,7 @@ print pred_len
## Model configuration

- 1. Define input data dimensions and model hyperparameters.

```python
mark_dict_len = 2    # dimension of the predicate-context region mark: a binary 0/1 feature, so the dimension is 2
@@ -240,7 +240,7 @@ target = paddle.layer.data(name='target', type=d_type(label_dict_len))
A special note: hidden_dim = 512 specifies an LSTM hidden vector of 128 dimensions; for details, see [lstmemory](http://www.paddlepaddle.org/doc/ui/api/trainer_config_helpers/layers.html#lstmemory) in the PaddlePaddle official documentation.
- 2. The word sequence, predicate, predicate context, and predicate-context region mark are mapped through lookup tables into sequences of real-valued embedding vectors.

```python
@@ -269,7 +269,7 @@ emb_layers.append(predicate_embedding)
emb_layers.append(mark_embedding)
```
- 3. 8 LSTM units learn over all input sequences in "forward / backward" order.

```python
hidden_0 = paddle.layer.mixed(
@@ -319,7 +319,7 @@ for i in range(1, depth):
    input_tmp = [mix_hidden, lstm]
```
- 4. Take the output of the last stacked LSTM together with the hidden-layer projection of that LSTM unit's input, and map them through a fully connected layer onto the dimension of the label dictionary to obtain the final feature vector representation.

```python
feature_out = paddle.layer.mixed(
@@ -333,7 +333,7 @@ input=[
    ], )
```
- 5. At the end of the network, a CRF layer computes the cost; its parameter is named `crfw`. This layer requires the correct data labels (target) as input.

```python
crf_cost = paddle.layer.crf(
@@ -346,7 +346,7 @@ crf_cost = paddle.layer.crf(
        learning_rate=mix_hidden_lr))
```
- 6. The CRF decoding layer shares its parameter name with the CRF layer, so the two share weights. If the correct data labels (target) are supplied as input, it counts the mislabeled tags, which can be used to evaluate the model; without labels, it infers the optimal tag sequence and can be used for prediction.

```python
crf_dec = paddle.layer.crf_decoding(
......