Commit d3064c26 authored by Dang Qingqing

Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into quantize_op_fix2

*.pyc
train.log
output
data/cifar-10-batches-py/
data/cifar-10-python.tar.gz
data/*.txt
data/*.list
data/mean.meta
@@ -10,9 +10,9 @@
.. toctree::
   :maxdepth: 2

   image_classification/README.cn.md
   word2vec/README.cn.md
   recommender_system/README.cn.md
   understand_sentiment/README.cn.md
   label_semantic_roles/README.cn.md
   machine_translation/README.cn.md
data/train.list
data/test.*
data/conll05st-release.tar.gz
data/conll05st-release
data/predicate_dict
data/label_dict
data/word_dict
data/emb
data/feature
output
predict.res
train.log
data/wmt14
data/pre-wmt14
pretrained/wmt14_model
gen.log
gen_result
train.log
dataprovider_copy_1.py
*.pyc
multi-bleu.perl
@@ -30,7 +30,9 @@
1 -6.23177 These are the light of hope and relief . <e>
2 -7.7914 These are the light of hope and the relief of hope . <e>
```
- The first column is the index of the generated sentence; the second column is the sentence's score (sorted from high to low, higher is better); the third column is the generated English sentence.
- There are also two special tokens: `<e>` marks the end of a sentence, and `<unk>` marks an unknown word, i.e., a word that never appeared in the training dictionary.

## Model Overview
@@ -78,18 +80,15 @@
During training for machine translation, the goal of the decoding stage is to maximize the probability of the next correct target-language word. The idea is:

1. At each time step, compute the next hidden state `$z_{i+1}$` from the encoded source-sentence information (also called the context vector) `$c$`, the `$i$`-th word `$u_i$` of the ground-truth target sequence, and the decoder RNN's hidden state `$z_i$` at time `$i$`:

$$z_{i+1}=\phi_{\theta'}\left(c,u_i,z_i\right)$$

where `$\phi_{\theta'}$` is a nonlinear activation function; `$c=q\mathbf{h}$` is the context vector of the source sentence, and when the [attention mechanism](#注意力机制) is not used, if the [encoder](#编码器) outputs the last element of the encoded source sentence, we can define `$c=h_T$`; `$u_i$` is the `$i$`-th word of the target sequence, with `$u_0$` being the start token `<s>` that marks the beginning of decoding; `$z_i$` is the hidden state of the decoder RNN at time `$i$`, with `$z_0$` initialized as an all-zero vector.

2. Normalize `$z_{i+1}$` with `softmax` to obtain the probability distribution `$p_{i+1}$` over the `$i+1$`-th target word (a minimal sketch of one such step follows this list):

$$p\left(u_{i+1}|u_{<i+1},\mathbf{x}\right)=softmax(W_sz_{i+1}+b_z)$$

where `$W_sz_{i+1}+b_z$` scores every candidate output word, and the softmax normalization turns these scores into the probability `$p_{i+1}$` of the `$i+1$`-th word.

3. Compute the cost from `$p_{i+1}$` and `$u_{i+1}$`.

4. Repeat steps 1-3 until every word of the target sequence has been processed.
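Below is a minimal NumPy sketch of one training-time decoding step (steps 1 and 2 above), assuming toy dimensions; the parameter names (`W`, `U`, `C`, `b`, `W_s`, `b_z`) and the choice of `tanh` for `$\phi$` are illustrative assumptions, not the tutorial's actual implementation.

```python
import numpy as np

hidden_dim, word_dim, vocab_size = 4, 3, 6
rng = np.random.RandomState(0)
W = rng.randn(hidden_dim, word_dim)    # acts on the word embedding u_i
U = rng.randn(hidden_dim, hidden_dim)  # acts on the previous state z_i
C = rng.randn(hidden_dim, hidden_dim)  # acts on the context vector c
b = np.zeros(hidden_dim)
W_s, b_z = rng.randn(vocab_size, hidden_dim), np.zeros(vocab_size)

def decoder_step(c, u_i, z_i):
    # z_{i+1} = phi(c, u_i, z_i); here phi is tanh of a linear combination
    z_next = np.tanh(W.dot(u_i) + U.dot(z_i) + C.dot(c) + b)
    # p_{i+1} = softmax(W_s z_{i+1} + b_z)
    scores = W_s.dot(z_next) + b_z
    p_next = np.exp(scores - scores.max())
    return z_next, p_next / p_next.sum()

c = rng.randn(hidden_dim)   # context vector from the encoder
u_0 = rng.randn(word_dim)   # embedding of the start token <s>
z_0 = np.zeros(hidden_dim)  # initial decoder state
z_1, p_1 = decoder_step(c, u_0, z_0)
print(p_1.sum())            # 1.0: a valid probability distribution
```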
The generation process of a machine translation model, loosely speaking, translates a source sentence using a pre-trained model. The decoding stage during generation differs from the training procedure described above; see [beam search](#柱搜索算法) for details.

@@ -103,8 +102,11 @@ $$p\left(u_{i+1}|u_{<i+1},\mathbf{x}\right)=softmax(W_sz_{i+1}+b_z)$$
In the decoding stage with beam search, the goal is to maximize the probability of the generated sequence. The idea is:

1. At each time step, compute the next hidden state `$z_{i+1}$` from the encoded source-sentence information `$c$`, the `$i$`-th generated target word `$u_i$`, and the RNN hidden state `$z_i$` at time `$i$`.

2. Normalize `$z_{i+1}$` with `softmax` to obtain the probability distribution `$p_{i+1}$` over the `$i+1$`-th target word.

3. Sample the word `$u_{i+1}$` according to `$p_{i+1}$`.

4. Repeat steps 1-3 until the end-of-sentence token `<e>` is produced or the maximum generation length is exceeded.

Note: `$z_{i+1}$` and `$p_{i+1}$` are computed by the same formulas as in the [decoder](#解码器). Since each generation step is taken greedily, the result is not guaranteed to be a global optimum. A concrete sketch of the loop follows.
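To make the beam-search loop concrete, here is a minimal, self-contained Python sketch; the `step` function is a hypothetical stand-in for the RNN decoder's next-word distribution, and the token ids (`0` for `<s>`, `1` for `<e>`) are illustrative assumptions rather than the tutorial's actual encoding.

```python
import heapq
import math

def beam_search(step, beam_size=3, end_id=1, max_len=8):
    beams = [(0.0, [0])]            # (log-probability, token ids); 0 = <s>
    finished = []
    for _ in range(max_len):
        candidates = []
        for logp, seq in beams:
            if seq[-1] == end_id:   # this candidate already ended
                finished.append((logp, seq))
                continue
            for word_id, p in enumerate(step(seq)):
                candidates.append((logp + math.log(p), seq + [word_id]))
        if not candidates:          # every candidate has ended
            break
        # keep only the beam_size highest-scoring prefixes
        beams = heapq.nlargest(beam_size, candidates)
    finished.extend(b for b in beams if b not in finished)
    return max(finished)            # best (score, sequence) pair

# Toy next-word distribution over a 3-token vocabulary {<s>, <e>, w}.
toy_step = lambda seq: [0.1, 0.3, 0.6]
print(beam_search(toy_step))
```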
@@ -116,9 +118,13 @@ $$p\left(u_{i+1}|u_{<i+1},\mathbf{x}\right)=softmax(W_sz_{i+1}+b_z)$$

### Data Preprocessing

Our preprocessing pipeline has two steps (a plain-Python sketch of both steps follows this list):

- Merge each source-to-target parallel corpus into a single file:
  - Merge each pair of `XXX.src` and `XXX.trg` files into one file `XXX`.
  - Line `$i$` of `XXX` is line `$i$` of `XXX.src` joined with line `$i$` of `XXX.trg`, separated by `'\t'`.
- Build a "source dictionary" and a "target dictionary" from the training data. Each dictionary has **DICTSIZE** entries: the (DICTSIZE - 3) most frequent words in the corpus, plus the 3 special tokens `<s>` (start of sequence), `<e>` (end of sequence), and `<unk>` (unknown word).
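A minimal sketch of these two steps in plain Python, assuming line-aligned plain-text corpus files; the helper names and paths are hypothetical:

```python
import collections

def merge_parallel(src_path, trg_path, out_path):
    # line i of the output = line i of XXX.src + '\t' + line i of XXX.trg
    with open(src_path) as src, open(trg_path) as trg, open(out_path, 'w') as out:
        for s, t in zip(src, trg):
            out.write(s.rstrip('\n') + '\t' + t.rstrip('\n') + '\n')

def build_dict(corpus_path, dict_size):
    counter = collections.Counter()
    with open(corpus_path) as f:
        for line in f:
            counter.update(line.split())
    # the (dict_size - 3) most frequent words, after 3 reserved special tokens
    words = [w for w, _ in counter.most_common(dict_size - 3)]
    return {w: i for i, w in enumerate(['<s>', '<e>', '<unk>'] + words)}
```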
### Sample Data

@@ -132,6 +138,7 @@ $$p\left(u_{i+1}|u_{<i+1},\mathbf{x}\right)=softmax(W_sz_{i+1}+b_z)$$

Now we configure the model according to the form of the input data. First, import the required libraries and define the global variables.

```python
from __future__ import print_function
import contextlib
import numpy as np
```
@@ -157,139 +164,152 @@ decoder_size = hidden_dim
Then implement the encoder as follows:

```python
def encoder(is_sparse):
    src_word_id = pd.data(
        name="src_word_id", shape=[1], dtype='int64', lod_level=1)
    src_embedding = pd.embedding(
        input=src_word_id,
        size=[dict_size, word_dim],
        dtype='float32',
        is_sparse=is_sparse,
        param_attr=fluid.ParamAttr(name='vemb'))

    fc1 = pd.fc(input=src_embedding, size=hidden_dim * 4, act='tanh')
    lstm_hidden0, lstm_0 = pd.dynamic_lstm(input=fc1, size=hidden_dim * 4)
    encoder_out = pd.sequence_last_step(input=lstm_hidden0)
    return encoder_out
```
Next, implement the decoder for training mode:

```python
def train_decoder(context, is_sparse):
    trg_language_word = pd.data(
        name="target_language_word", shape=[1], dtype='int64', lod_level=1)
    trg_embedding = pd.embedding(
        input=trg_language_word,
        size=[dict_size, word_dim],
        dtype='float32',
        is_sparse=is_sparse,
        param_attr=fluid.ParamAttr(name='vemb'))

    rnn = pd.DynamicRNN()
    with rnn.block():
        current_word = rnn.step_input(trg_embedding)
        pre_state = rnn.memory(init=context)
        current_state = pd.fc(input=[current_word, pre_state],
                              size=decoder_size,
                              act='tanh')

        current_score = pd.fc(input=current_state,
                              size=target_dict_dim,
                              act='softmax')
        rnn.update_memory(pre_state, current_state)
        rnn.output(current_score)

    return rnn()
```
Then implement the decoder for inference mode:

```python
def decode(context, is_sparse):
    init_state = context
    array_len = pd.fill_constant(shape=[1], dtype='int64', value=max_length)
    counter = pd.zeros(shape=[1], dtype='int64', force_cpu=True)

    # fill the first element with init_state
    state_array = pd.create_array('float32')
    pd.array_write(init_state, array=state_array, i=counter)

    # ids, scores as memory
    ids_array = pd.create_array('int64')
    scores_array = pd.create_array('float32')

    init_ids = pd.data(name="init_ids", shape=[1], dtype="int64", lod_level=2)
    init_scores = pd.data(
        name="init_scores", shape=[1], dtype="float32", lod_level=2)

    pd.array_write(init_ids, array=ids_array, i=counter)
    pd.array_write(init_scores, array=scores_array, i=counter)

    cond = pd.less_than(x=counter, y=array_len)
    while_op = pd.While(cond=cond)
    with while_op.block():
        pre_ids = pd.array_read(array=ids_array, i=counter)
        pre_state = pd.array_read(array=state_array, i=counter)
        pre_score = pd.array_read(array=scores_array, i=counter)

        # expand the lod of pre_state to be the same with pre_score
        pre_state_expanded = pd.sequence_expand(pre_state, pre_score)

        pre_ids_emb = pd.embedding(
            input=pre_ids,
            size=[dict_size, word_dim],
            dtype='float32',
            is_sparse=is_sparse)

        # use rnn unit to update rnn
        current_state = pd.fc(input=[pre_state_expanded, pre_ids_emb],
                              size=decoder_size,
                              act='tanh')
        current_state_with_lod = pd.lod_reset(x=current_state, y=pre_score)
        # use score to do beam search
        current_score = pd.fc(input=current_state_with_lod,
                              size=target_dict_dim,
                              act='softmax')
        topk_scores, topk_indices = pd.topk(current_score, k=beam_size)
        # calculate accumulated scores after topk to reduce computation cost
        accu_scores = pd.elementwise_add(
            x=pd.log(topk_scores), y=pd.reshape(pre_score, shape=[-1]), axis=0)
        selected_ids, selected_scores = pd.beam_search(
            pre_ids,
            pre_score,
            topk_indices,
            accu_scores,
            beam_size,
            end_id=10,
            level=0)

        pd.increment(x=counter, value=1, in_place=True)

        # update the memories
        pd.array_write(current_state, array=state_array, i=counter)
        pd.array_write(selected_ids, array=ids_array, i=counter)
        pd.array_write(selected_scores, array=scores_array, i=counter)

        # update the break condition: up to the max length or all candidates of
        # source sentences have ended.
        length_cond = pd.less_than(x=counter, y=array_len)
        finish_cond = pd.logical_not(pd.is_empty(x=selected_ids))
        pd.logical_and(x=length_cond, y=finish_cond, out=cond)

    translation_ids, translation_scores = pd.beam_search_decode(
        ids=ids_array, scores=scores_array, beam_size=beam_size, end_id=10)

    return translation_ids, translation_scores
```
Next, we define a `train_program` that takes the result computed by `inference_program` and computes the error against the labeled data. We also define an `optimizer_func` to specify the optimizer.

```python
def train_program(is_sparse):
    context = encoder(is_sparse)
    rnn_out = train_decoder(context, is_sparse)
    label = pd.data(
        name="target_language_next_word", shape=[1], dtype='int64', lod_level=1)
    cost = pd.cross_entropy(input=rnn_out, label=label)
    avg_cost = pd.mean(cost)
    return avg_cost


def optimizer_func():
    return fluid.optimizer.Adagrad(
        learning_rate=1e-4,
        regularization=fluid.regularizer.L2DecayRegularizer(
            regularization_coeff=0.1))
```
## Training the Model

@@ -307,9 +327,9 @@ place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
```python
train_reader = paddle.batch(
    paddle.reader.shuffle(
        paddle.dataset.wmt14.train(dict_size), buf_size=1000),
    batch_size=batch_size)
```
### Build the Trainer

@@ -318,9 +338,9 @@ batch_size=batch_size)
```python
is_sparse = False
trainer = fluid.Trainer(
    train_func=partial(train_program, is_sparse),
    place=place,
    optimizer_func=optimizer_func)
```
### Feeding the Data

@@ -329,8 +349,8 @@ optimizer_func=optimizer_func)
```python
feed_order = [
    'src_word_id', 'target_language_word', 'target_language_next_word'
]
```
### Event Handler

@@ -338,12 +358,12 @@ feed_order = [
```python
def event_handler(event):
    if isinstance(event, fluid.EndStepEvent):
        if event.step % 10 == 0:
            print('pass_id=' + str(event.epoch) + ' batch=' + str(event.step))

        if event.step == 20:
            trainer.stop()
```
### Start Training

@@ -353,10 +373,10 @@ trainer.stop()
```python
EPOCH_NUM = 1

trainer.train(
    reader=train_reader,
    num_epochs=EPOCH_NUM,
    event_handler=event_handler,
    feed_order=feed_order)
```
## Applying the Model

@@ -377,7 +397,7 @@ translation_ids, translation_scores = decode(context, is_sparse)
```python
init_ids_data = np.array([1 for _ in range(batch_size)], dtype='int64')
init_scores_data = np.array(
    [1. for _ in range(batch_size)], dtype='float32')
init_ids_data = init_ids_data.reshape((batch_size, 1))
init_scores_data = init_scores_data.reshape((batch_size, 1))
init_lod = [1] * batch_size
```
@@ -387,14 +407,14 @@ init_ids = fluid.create_lod_tensor(init_ids_data, init_lod, place)
```python
init_scores = fluid.create_lod_tensor(init_scores_data, init_lod, place)

test_data = paddle.batch(
    paddle.reader.shuffle(
        paddle.dataset.wmt14.test(dict_size), buf_size=1000),
    batch_size=batch_size)

feed_order = ['src_word_id']
feed_list = [
    framework.default_main_program().global_block().var(var_name)
    for var_name in feed_order
]
feeder = fluid.DataFeeder(feed_list, place)
```
@@ -409,27 +429,30 @@ exe = Executor(place)
```python
exe.run(framework.default_startup_program())

for data in test_data():
    feed_data = map(lambda x: [x[0]], data)
    feed_dict = feeder.feed(feed_data)
    feed_dict['init_ids'] = init_ids
    feed_dict['init_scores'] = init_scores

    results = exe.run(
        framework.default_main_program(),
        feed=feed_dict,
        fetch_list=[translation_ids, translation_scores],
        return_numpy=False)

    result_ids = np.array(results[0])
    result_ids_lod = results[0].lod()
    result_scores = np.array(results[1])

    print("Original sentence:")
    print(" ".join([src_dict[w] for w in feed_data[0][0][1:-1]]))
    print("Translated score and sentence:")
    for i in xrange(beam_size):
        start_pos = result_ids_lod[1][i] + 1
        end_pos = result_ids_lod[1][i+1]
        print("%d\t%.4f\t%s\n" % (i+1, result_scores[end_pos-1],
            " ".join([trg_dict[w] for w in result_ids[start_pos:end_pos]])))

    break
```
## Summary
......
data/aclImdb
data/imdb
data/pre-imdb
data/mosesdecoder-master
*.log
model_output
dataprovider_copy_1.py
model.list
*.pyc
.DS_Store
# Sentiment Analysis

The source code for this tutorial is in [book/understand_sentiment](https://github.com/PaddlePaddle/book/tree/develop/06.understand_sentiment). First-time users should consult the PaddlePaddle [installation guide](https://github.com/PaddlePaddle/book/blob/develop/README.cn.md#运行这本书); for more material, see this tutorial's [video lesson](http://bit.baidu.com/course/detail/id/177.html).
## Background

@@ -36,54 +36,54 @@

Recurrent neural networks are a powerful tool for accurately modeling sequential data; in fact, their theoretical computational power is Turing-complete \[[4](#参考文献)\]. Natural language is a typical kind of sequential data (a sequence of words). In recent years, recurrent neural networks and their variants (such as long short-term memory \[[5](#参考文献)\]) have performed excellently, and often best, on many natural language processing tasks such as language modeling, syntactic parsing, semantic role labeling (and sequence labeling in general), semantic representation, image captioning, dialogue, and machine translation.
<p align="center">
<img src="image/rnn.png" width = "60%" align="center"/><br/>
Figure 1. A recurrent neural network unrolled in time
</p>
As unrolled in Figure 1, at time $t$ the network reads the $t$-th input $x_t$ (a vector) and the previous hidden state $h_{t-1}$ (a vector; $h_0$ is usually initialized to the $0$ vector), and computes the current hidden state $h_t$; this step repeats until all input has been read. Writing the function the recurrent network represents as $f$, its formula is:

$$h_t=f(x_t,h_{t-1})=\sigma(W_{xh}x_t+W_{hh}h_{t-1}+b_h)$$

where $W_{xh}$ is the input-to-hidden weight matrix, $W_{hh}$ is the hidden-to-hidden weight matrix, $b_h$ is the hidden-layer bias vector, and $\sigma$ is the $sigmoid$ function.

When processing natural language, each word (a one-hot vector) is usually first mapped to its word embedding, which then serves as the recurrent network's input $x_t$ at each time step. Depending on the application, other layers can be attached to the recurrent hidden layer: for instance, feeding one recurrent network's hidden output into the input of another builds a deep (or stacked) recurrent network, and taking the hidden state of the last time step as the sentence representation feeds a downstream classifier, and so on.
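As a concrete illustration, here is a minimal NumPy sketch of the recurrence above with toy dimensions; all names and sizes are illustrative assumptions.

```python
import numpy as np

hidden_dim, input_dim, steps = 4, 3, 5
rng = np.random.RandomState(0)
W_xh = rng.randn(hidden_dim, input_dim)   # input-to-hidden weights
W_hh = rng.randn(hidden_dim, hidden_dim)  # hidden-to-hidden weights
b_h = np.zeros(hidden_dim)                # hidden-layer bias

h = np.zeros(hidden_dim)                  # h_0: the zero vector
for x_t in rng.randn(steps, input_dim):   # read the inputs step by step
    # h_t = sigmoid(W_xh x_t + W_hh h_{t-1} + b_h)
    h = 1.0 / (1.0 + np.exp(-(W_xh.dot(x_t) + W_hh.dot(h) + b_h)))
print(h)                                  # the final hidden state h_T
```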
### Long Short-Term Memory (LSTM)

For longer sequences, recurrent networks are prone to vanishing or exploding gradients during training \[[6](#参考文献)\]. To address this, Hochreiter and Schmidhuber (1997) proposed LSTM (long short-term memory \[[5](#参考文献)\]).

Compared with a simple recurrent network, an LSTM adds a memory cell $c$, an input gate $i$, a forget gate $f$, and an output gate $o$. Together, the gates and the memory cell greatly improve a recurrent network's ability to handle long sequences. Writing the LSTM-based recurrent network as a function $F$:

$$ h_t=F(x_t,h_{t-1})$$

$F$ is composed of the following formulas \[[7](#参考文献)\]:

$$ i_t = \sigma{(W_{xi}x_t+W_{hi}h_{t-1}+W_{ci}c_{t-1}+b_i)} $$
$$ f_t = \sigma(W_{xf}x_t+W_{hf}h_{t-1}+W_{cf}c_{t-1}+b_f) $$
$$ c_t = f_t\odot c_{t-1}+i_t\odot tanh(W_{xc}x_t+W_{hc}h_{t-1}+b_c) $$
$$ o_t = \sigma(W_{xo}x_t+W_{ho}h_{t-1}+W_{co}c_{t}+b_o) $$
$$ h_t = o_t\odot tanh(c_t) $$

where $i_t, f_t, c_t, o_t$ are the vector values of the input gate, forget gate, memory cell, and output gate, the subscripted $W$ and $b$ are model parameters, $tanh$ is the hyperbolic tangent, and $\odot$ denotes elementwise multiplication. The input gate controls how strongly new input enters the memory cell $c$; the forget gate controls how strongly the memory cell retains its previous value; the output gate controls how strongly the memory cell is emitted. The three gates are computed in similar ways but with entirely separate parameters, and each controls the memory cell $c$ in its own way, as shown in Figure 2:
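Here is a minimal NumPy sketch of a single LSTM step that follows the five formulas above, with toy dimensions; the parameter dictionary and sizes are illustrative assumptions, not PaddlePaddle's internals.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n, m = 4, 3  # hidden size, input size
rng = np.random.RandomState(0)
# W_x* act on the input x_t (n x m); all other matrices are n x n.
P = {k: rng.randn(n, m) if k.startswith('Wx') else rng.randn(n, n)
     for k in ['Wxi', 'Whi', 'Wci', 'Wxf', 'Whf', 'Wcf',
               'Wxc', 'Whc', 'Wxo', 'Who', 'Wco']}
b = {k: np.zeros(n) for k in 'ifco'}

def lstm_step(x_t, h_prev, c_prev):
    i_t = sigmoid(P['Wxi'].dot(x_t) + P['Whi'].dot(h_prev) + P['Wci'].dot(c_prev) + b['i'])
    f_t = sigmoid(P['Wxf'].dot(x_t) + P['Whf'].dot(h_prev) + P['Wcf'].dot(c_prev) + b['f'])
    c_t = f_t * c_prev + i_t * np.tanh(P['Wxc'].dot(x_t) + P['Whc'].dot(h_prev) + b['c'])
    o_t = sigmoid(P['Wxo'].dot(x_t) + P['Who'].dot(h_prev) + P['Wco'].dot(c_t) + b['o'])
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

h, c = np.zeros(n), np.zeros(n)
h, c = lstm_step(rng.randn(m), h, c)  # one step; loop over the inputs in practice
```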
<p align="center">
<img src="image/lstm.png" width = "65%" align="center"/><br/>
Figure 2. The LSTM at time $t$ [7]
</p>

By adding memory and control gates to the simple recurrent network, the LSTM strengthens its handling of long-range dependencies. A similar improvement is the Gated Recurrent Unit (GRU) \[[8](#参考文献)\], whose design is somewhat more concise. **These improvements differ in detail, but their macroscopic description is the same as the simple recurrent network's (as in Figure 2): the hidden state changes according to the current input and the previous hidden state, and this process repeats until the input is exhausted:**

$$ h_t=Recurrent(x_t,h_{t-1})$$

where $Recurrent$ can be a simple recurrent network, a GRU, or an LSTM.
### Stacked Bidirectional LSTM

For a recurrent network running in the normal direction, $h_t$ contains information about the inputs before time $t$, i.e., the left context. To capture the right context as well, we can run a recurrent network in the reverse direction (processing the input backwards). Combining this with the technique of building deep recurrent networks (deeper networks tend to yield more abstract, higher-level representations), we can build a more powerful LSTM-based stacked bidirectional recurrent network \[[9](#参考文献)\] to model sequential data.

As shown in Figure 3 (with three layers as an example), odd-numbered LSTM layers run forward and even-numbered layers run backward; each higher LSTM layer takes the information from the layer below (and all earlier layers) as its input. Applying max pooling over the time dimension to the topmost LSTM sequence yields a fixed-length vector representation of the text (a representation that fully fuses the text's context and abstracts it at a deep level); finally, this representation is fed into a softmax layer to build the classifier.
<p align="center">
<img src="image/stacked_lstm.jpg" width=450><br/>
Figure 3. A stacked bidirectional LSTM for text classification
</p>

@@ -94,11 +94,11 @@ $$ h_t=Recurrent(x_t,h_{t-1})$$
```text
aclImdb
|- test
   |-- neg
   |-- pos
|- train
   |-- neg
   |-- pos
```
Paddle implements automatic download and reading of the IMDB dataset in `dataset/imdb.py`, and provides APIs for reading the dictionary, the training data, the test data, and so on, as sketched below.
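A minimal usage sketch of that API, assuming a Fluid-era `paddle` package is installed (the exact return shapes are worth verifying against your version):

```python
import paddle

# word -> id mapping built from the corpus
word_dict = paddle.dataset.imdb.word_dict()

# train() returns a reader creator; calling it yields (word_ids, label) pairs
train_reader = paddle.dataset.imdb.train(word_dict)
first_ids, first_label = next(train_reader())
print(len(word_dict), first_label)
```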
@@ -107,6 +107,7 @@

In this example we implement two text classification algorithms: the text convolutional neural network introduced in the [recommender system](https://github.com/PaddlePaddle/book/tree/develop/05.recommender_system) chapter, and the [stacked bidirectional LSTM](#栈式双向LSTM(Stacked Bidirectional LSTM)). First we import the required libraries and define the global variables:
```python
from __future__ import print_function
import paddle
import paddle.fluid as fluid
from functools import partial
```
@@ -115,6 +116,7 @@ import numpy as np
```python
CLASS_DIM = 2
EMB_DIM = 128
HID_DIM = 512
STACKED_NUM = 3
BATCH_SIZE = 128
USE_GPU = False
```
@@ -126,23 +128,23 @@ USE_GPU = False
```python
def convolution_net(data, input_dim, class_dim, emb_dim, hid_dim):
    emb = fluid.layers.embedding(
        input=data, size=[input_dim, emb_dim], is_sparse=True)
    conv_3 = fluid.nets.sequence_conv_pool(
        input=emb,
        num_filters=hid_dim,
        filter_size=3,
        act="tanh",
        pool_type="sqrt")
    conv_4 = fluid.nets.sequence_conv_pool(
        input=emb,
        num_filters=hid_dim,
        filter_size=4,
        act="tanh",
        pool_type="sqrt")
    prediction = fluid.layers.fc(
        input=[conv_3, conv_4], size=class_dim, act="softmax")
    return prediction
```
The network input `input_dim` is the size of the dictionary, and `class_dim` is the number of classes. Here we use the [`sequence_conv_pool`](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/trainer_config_helpers/networks.py) API to implement the convolution and pooling operations.

@@ -154,27 +156,26 @@ return prediction

```python
def stacked_lstm_net(data, input_dim, class_dim, emb_dim, hid_dim, stacked_num):
    emb = fluid.layers.embedding(
        input=data, size=[input_dim, emb_dim], is_sparse=True)

    fc1 = fluid.layers.fc(input=emb, size=hid_dim)
    lstm1, cell1 = fluid.layers.dynamic_lstm(input=fc1, size=hid_dim)

    inputs = [fc1, lstm1]

    for i in range(2, stacked_num + 1):
        fc = fluid.layers.fc(input=inputs, size=hid_dim)
        lstm, cell = fluid.layers.dynamic_lstm(
            input=fc, size=hid_dim, is_reverse=(i % 2) == 0)
        inputs = [fc, lstm]

    fc_last = fluid.layers.sequence_pool(input=inputs[0], pool_type='max')
    lstm_last = fluid.layers.sequence_pool(input=inputs[1], pool_type='max')

    prediction = fluid.layers.fc(
        input=[fc_last, lstm_last], size=class_dim, act='softmax')
    return prediction
```
The stacked bidirectional LSTM above extracts high-level features and maps them to a vector whose size equals the number of classes. The `softmax` activation of the final layer computes the probability of each class.

@@ -184,12 +185,13 @@ return prediction
```python
def inference_program(word_dict):
    data = fluid.layers.data(
        name="words", shape=[1], dtype="int64", lod_level=1)

    dict_dim = len(word_dict)
    net = convolution_net(data, dict_dim, CLASS_DIM, EMB_DIM, HID_DIM)
    # net = stacked_lstm_net(data, dict_dim, CLASS_DIM, EMB_DIM, HID_DIM, STACKED_NUM)
    return net
```
Here we define the `train_program`, which uses the result returned by `inference_program` to compute the error. We also define the optimization function `optimizer_func`.

@@ -200,16 +202,16 @@ return net
```python
def train_program(word_dict):
    prediction = inference_program(word_dict)
    label = fluid.layers.data(name="label", shape=[1], dtype="int64")
    cost = fluid.layers.cross_entropy(input=prediction, label=label)
    avg_cost = fluid.layers.mean(cost)
    accuracy = fluid.layers.accuracy(input=prediction, label=label)
    return [avg_cost, accuracy]


def optimizer_func():
    return fluid.optimizer.Adagrad(learning_rate=0.002)
```
## Training the Model

@@ -236,9 +238,9 @@ word_dict = paddle.dataset.imdb.word_dict()
print ("Reading training data....") print ("Reading training data....")
train_reader = paddle.batch( train_reader = paddle.batch(
paddle.reader.shuffle( paddle.reader.shuffle(
paddle.dataset.imdb.train(word_dict), buf_size=25000), paddle.dataset.imdb.train(word_dict), buf_size=25000),
batch_size=BATCH_SIZE) batch_size=BATCH_SIZE)
``` ```
### Build the Trainer

@@ -246,9 +248,9 @@ batch_size=BATCH_SIZE)
```python
trainer = fluid.Trainer(
    train_func=partial(train_program, word_dict),
    place=place,
    optimizer_func=optimizer_func)
```
### Feeding the Data

@@ -268,13 +270,13 @@ feed_order = ['words', 'label']
params_dirname = "understand_sentiment_conv.inference.model" params_dirname = "understand_sentiment_conv.inference.model"
def event_handler(event): def event_handler(event):
if isinstance(event, fluid.EndStepEvent): if isinstance(event, fluid.EndStepEvent):
print("Step {0}, Epoch {1} Metrics {2}".format( print("Step {0}, Epoch {1} Metrics {2}".format(
event.step, event.epoch, map(np.array, event.metrics))) event.step, event.epoch, map(np.array, event.metrics)))
if event.step == 10: if event.step == 10:
trainer.save_params(params_dirname) trainer.save_params(params_dirname)
trainer.stop() trainer.stop()
``` ```
### Start Training

@@ -283,10 +285,10 @@ trainer.stop()
```python
trainer.train(
    num_epochs=1,
    event_handler=event_handler,
    reader=train_reader,
    feed_order=feed_order)
```
## Applying the Model

@@ -297,7 +299,7 @@ feed_order=feed_order)
```python
inferencer = fluid.Inferencer(
    infer_func=partial(inference_program, word_dict), param_path=params_dirname, place=place)
```
### Generate Test Input Data

@@ -307,14 +309,14 @@ inference_program, param_path=params_dirname, place=place)
```python
reviews_str = [
    'read the book forget the movie', 'this is a great movie', 'this is very bad'
]
reviews = [c.split() for c in reviews_str]

UNK = word_dict['<unk>']
lod = []
for c in reviews:
    lod.append([word_dict.get(words, UNK) for words in c])

base_shape = [[len(c) for c in lod]]
```
@@ -329,7 +331,7 @@ tensor_words = fluid.create_lod_tensor(lod, base_shape, place)
```python
results = inferencer.infer({'words': tensor_words})

for i, r in enumerate(results[0]):
    print("Predict probability of ", r[0], " to be positive and ", r[1],
          " to be negative for review \'", reviews_str[i], "\'")
```
......
data/train.list
data/test.list
data/simple-examples*
@@ -57,7 +57,28 @@ paddlepaddle-gpu==0.11.0 Version 0.11.0, built with CUDA 7.5 and cuDNN 5
You can find all released versions of paddlepaddle-gpu in the
`Release History <https://pypi.org/project/paddlepaddle-gpu/#history>`_.

If you need to fetch and install the latest (develop-branch) PaddlePaddle, you can download the
latest whl package and C-API development package from our CI system and install them; the table below lists the available builds:

If the following login page appears when you click one of the links below, click "Log in as guest" to start the download:
.. image:: paddleci.png
   :scale: 50 %
   :align: center
.. csv-table:: Latest whl package for each version
   :header: "Version", "cp27-cp27mu", "cp27-cp27m"
   :widths: 1, 3, 3
"stable_cuda9.0_cudnn7", "`paddlepaddle_gpu-0.14.0-cp27-cp27mu-manylinux1_x86_64.whl <https://files.pythonhosted.org/packages/ee/ee/5d96e99d4a6d57bd1a7a8c4c98124a5ba0f6f0e07f38f4cee1365e0d9734/paddlepaddle_gpu-0.14.0-cp27-cp27mu-manylinux1_x86_64.whl>`__", "`paddlepaddle_gpu-0.14.0-cp27-cp27m-manylinux1_x86_64.whl <https://files.pythonhosted.org/packages/2e/65/3c1e44417dfc4afc7004f4db06789876b1237a0b6b234e0bd4213f3258b7/paddlepaddle_gpu-0.14.0-cp27-cp27m-manylinux1_x86_64.whl>`__"
"stable_cuda8.0_cudnn7", "`paddlepaddle_gpu-0.14.0.post87-cp27-cp27mu-manylinux1_x86_64.whl <https://files.pythonhosted.org/packages/a1/eb/261d920ede38d4b2b8dfb5817d7f7d25c526b1a70260f23312ad6029c0d3/paddlepaddle_gpu-0.14.0.post87-cp27-cp27mu-manylinux1_x86_64.whl>`__", "`paddlepaddle_gpu-0.14.0.post87-cp27-cp27m-manylinux1_x86_64.whl <https://files.pythonhosted.org/packages/54/1d/2c2a5c8665634b47fa925839108752611202a7c08ba4d65c2ee79f825a0e/paddlepaddle_gpu-0.14.0.post87-cp27-cp27m-manylinux1_x86_64.whl>`__"
"stable_cuda8.0_cudnn5", "`paddlepaddle_gpu-0.14.0.post85-cp27-cp27mu-manylinux1_x86_64.whl <https://files.pythonhosted.org/packages/60/50/94d16d34976f06b3cd8818d9b7bf40a9ff16bc48120ac9254d976f8ffc35/paddlepaddle_gpu-0.14.0.post85-cp27-cp27mu-manylinux1_x86_64.whl>`__", "`paddlepaddle_gpu-0.14.0.post85-cp27-cp27m-manylinux1_x86_64.whl <https://files.pythonhosted.org/packages/24/dd/25c1db09524f654c80baa83e7aafdd67109449bd5b500964f4005047dcf8/paddlepaddle_gpu-0.14.0.post85-cp27-cp27m-manylinux1_x86_64.whl>`__"
"cpu_avx_mkl", "`paddlepaddle-latest-cp27-cp27mu-linux_x86_64.whl <https://paddleci.ngrok.io/repository/download/Manylinux1_CpuAvxCp27cp27mu/845:id/paddlepaddle-latest-cp27-cp27mu-linux_x86_64.whl>`__", "`paddlepaddle-latest-cp27-cp27m-linux_x86_64.whl <https://paddleci.ngrok.io/repository/download/Manylinux1_CpuAvxCp27cp27mu/845:id/paddlepaddle-latest-cp27-cp27m-linux_x86_64.whl>`__"
"cpu_avx_openblas", "`paddlepaddle-latest-cp27-cp27mu-linux_x86_64.whl <https://paddleci.ngrok.io/repository/download/Manylinux1_CpuAvxOpenblas/846:id/paddlepaddle-latest-cp27-cp27mu-linux_x86_64.whl>`__", "`paddlepaddle-latest-cp27-cp27m-linux_x86_64.whl <https://paddleci.ngrok.io/repository/download/Manylinux1_CpuAvxOpenblas/846:id/paddlepaddle-latest-cp27-cp27m-linux_x86_64.whl>`__"
"cpu_noavx_openblas", "`paddlepaddle-latest-cp27-cp27mu-linux_x86_64.whl <https://paddleci.ngrok.io/repository/download/Manylinux1_CpuNoavxOpenblas/847:id/paddlepaddle-latest-cp27-cp27mu-linux_x86_64.whl>`__", "`paddlepaddle-latest-cp27-cp27m-linux_x86_64.whl <https://paddleci.ngrok.io/repository/download/Manylinux1_CpuNoavxOpenblas/847:id/paddlepaddle-latest-cp27-cp27m-linux_x86_64.whl>`_"
"cuda8.0_cudnn5_avx_mkl", "`paddlepaddle_gpu-latest-cp27-cp27mu-linux_x86_64.whl <https://paddleci.ngrok.io/repository/download/Manylinux1_Cuda80cudnn5cp27cp27mu/841:id/paddlepaddle_gpu-latest-cp27-cp27mu-linux_x86_64.whl>`__", "`paddlepaddle_gpu-latest-cp27-cp27m-linux_x86_64.whl <https://paddleci.ngrok.io/repository/download/Manylinux1_Cuda80cudnn5cp27cp27mu/841:id/paddlepaddle_gpu-latest-cp27-cp27m-linux_x86_64.whl>`__"
"cuda8.0_cudnn7_avx_mkl", "`paddlepaddle_gpu-latest-cp27-cp27mu-linux_x86_64.whl <https://paddleci.ngrok.io/repository/download/Manylinux1_Cuda8cudnn7cp27cp27mu/843:id/paddlepaddle_gpu-latest-cp27-cp27mu-linux_x86_64.whl>`__", "`paddlepaddle_gpu-latest-cp27-cp27m-linux_x86_64.whl <https://paddleci.ngrok.io/repository/download/Manylinux1_Cuda8cudnn7cp27cp27mu/843:id/paddlepaddle_gpu-latest-cp27-cp27m-linux_x86_64.whl>`__"
"cuda9.0_cudnn7_avx_mkl", "`paddlepaddle_gpu-latest-cp27-cp27mu-linux_x86_64.whl <https://paddleci.ngrok.io/repository/download/Manylinux1_Cuda90cudnn7avxMkl/842:id/paddlepaddle_gpu-latest-cp27-cp27mu-linux_x86_64.whl>`__", "`paddlepaddle_gpu-latest-cp27-cp27m-linux_x86_64.whl <https://paddleci.ngrok.io/repository/download/Manylinux1_Cuda90cudnn7avxMkl/842:id/paddlepaddle_gpu-latest-cp27-cp27m-linux_x86_64.whl>`__"
.. _FAQ:
......
@@ -36,6 +36,7 @@ paddle.fluid.default_startup_program ArgSpec(args=[], varargs=None, keywords=None, defaults=None)
paddle.fluid.default_main_program ArgSpec(args=[], varargs=None, keywords=None, defaults=None)
paddle.fluid.program_guard ArgSpec(args=[], varargs='args', keywords='kwds', defaults=None)
paddle.fluid.get_var ArgSpec(args=['name', 'program'], varargs=None, keywords=None, defaults=(None,))
paddle.fluid.name_scope ArgSpec(args=[], varargs='args', keywords='kwds', defaults=None)
paddle.fluid.Executor.__init__ ArgSpec(args=['self', 'place'], varargs=None, keywords=None, defaults=None)
paddle.fluid.Executor.close ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None)
paddle.fluid.Executor.run ArgSpec(args=['self', 'program', 'feed', 'fetch_list', 'feed_var_name', 'fetch_var_name', 'scope', 'return_numpy', 'use_program_cache'], varargs=None, keywords=None, defaults=(None, None, None, 'feed', 'fetch', None, True, False))
......
@@ -625,19 +625,11 @@ int MultiDevSSAGraphBuilder::GetVarDeviceID(const ir::Graph &graph,
void MultiDevSSAGraphBuilder::CreateScaleLossGradOp(
    ir::Graph *result, const std::string &loss_grad_name) const {
  for (size_t i = 0; i < places_.size(); ++i) {
    // Insert ScaleCost OpHandle
    auto *dev_ctx = platform::DeviceContextPool::Instance().Get(places_[i]);
    auto *op_handle = new ScaleLossGradOpHandle(
        result->CreateEmptyNode("scale_loss_grad", ir::Node::Type::kOperation),
        local_scopes_.size(), local_scopes_[i], places_[i], dev_ctx);
    result->Get<GraphOps>(kGraphOps).emplace_back(op_handle);

    // FIXME: Currently ScaleLossGradOp only use device_count as scale
@@ -744,7 +736,7 @@ void MultiDevSSAGraphBuilder::CreateDistTrainOp(ir::Graph *result,
          .emplace(varname, op_dev_id);
    }
  } else {
    PADDLE_THROW(
        "the distribute training related op should be in [split_byref, "
        "concat].");
  }
......
@@ -60,6 +60,7 @@ class Executor {
  void Run(const ProgramDesc& prog, Scope* scope, int block_id,
           bool create_local_scope = true, bool create_vars = true);
  // This API is very slow.
  void Run(const ProgramDesc& program, Scope* scope,
           std::map<std::string, const LoDTensor*>* feed_targets,
           std::map<std::string, LoDTensor*>* fetch_targets,
@@ -79,6 +80,7 @@ class Executor {
                              bool create_local_scope = true,
                              bool create_vars = true, bool keep_kids = false);
  // This API is very slow.
  void RunPreparedContext(ExecutorPrepareContext* ctx, Scope* scope,
                          std::map<std::string, const LoDTensor*>* feed_targets,
                          std::map<std::string, LoDTensor*>* fetch_targets,
......
@@ -59,7 +59,7 @@ void FindWhileOp(Graph* graph) {
  auto handle = [&](const GraphPatternDetector::subgraph_t& subgraph,
                    Graph* g) {
    auto* while_pat_node = gpd.pattern().RetrieveNode("while");
    auto* while_node = subgraph.at(while_pat_node);
    marked_nodes.insert(while_node);
  };
......
@@ -31,77 +31,34 @@ bool VarOutLinksToOp(Node* node, const std::string& op_type) {
}

void BuildFCPattern(PDPattern* pattern) {
  // Create Operators
  auto* mul_op = pattern->NewNode("mul")->assert_is_op("mul");
  auto* elementwise_add_op =
      pattern->NewNode("elementwise_add")->assert_is_op("elementwise_add");
  // Create variables
  // w
  auto* mul_weight_var = pattern->NewNode("mul_weight")
                             ->AsInput()
                             ->assert_is_op_nth_input("mul", "Y", 0);
  // x
  auto* mul_tmp_var = pattern->NewNode("mul_tmp_var")
                          ->AsInput()
                          ->assert_is_op_nth_input("mul", "X", 0);
  // intermediate variable, will be removed in the IR after fuse.
  auto* mul_out_var = pattern->NewNode("mul_out")
                          ->AsIntermediate()
                          ->assert_is_only_output_of_op("mul")
                          ->assert_is_op_input("elementwise_add");
  // bias
  auto* elementwise_add_tmp_var = pattern->NewNode("elementwise_add_tmpvar")
                                      ->assert_is_op_input("elementwise_add")
                                      ->AsInput();
  // output
  auto* elementwise_add_out_var = pattern->NewNode("elementwise_add_out")
                                      ->AsOutput()
                                      ->assert_is_op_output("elementwise_add");

  mul_op->LinksFrom({mul_weight_var, mul_tmp_var}).LinksTo({mul_out_var});
  elementwise_add_op->LinksFrom({mul_out_var, elementwise_add_tmp_var})
      .LinksTo({elementwise_add_out_var});
}
@@ -120,6 +77,7 @@ bool LinksReplace(std::vector<Node*>* links, Node* from, Node* to) {
std::unique_ptr<ir::Graph> FCFusePass::ApplyImpl(
    std::unique_ptr<ir::Graph> graph) const {
  PADDLE_ENFORCE(graph.get());
  FusePassBase::Init("fc", graph.get());

  std::unordered_set<Node*> nodes2delete;
@@ -127,11 +85,12 @@ std::unique_ptr<ir::Graph> FCFusePass::ApplyImpl(
  BuildFCPattern(gpd.mutable_pattern());

#define GET_NODE(id)                                              \
  PADDLE_ENFORCE(subgraph.count(gpd.pattern().RetrieveNode(#id)), \
                 "pattern has no Node called %s", #id);           \
  auto* id = subgraph.at(gpd.pattern().RetrieveNode(#id));        \
  PADDLE_ENFORCE_NOT_NULL(id, "subgraph has no node %s", #id);
  int found_fc_count = 0;
  auto handler = [&](const GraphPatternDetector::subgraph_t& subgraph,
                     Graph* g) {
    VLOG(4) << "handle FC fuse";
@@ -176,10 +135,13 @@ std::unique_ptr<ir::Graph> FCFusePass::ApplyImpl(
    graph->RemoveNode(mul);
    graph->RemoveNode(elementwise_add);
    graph->RemoveNode(mul_out);  // tmp variable
    found_fc_count++;
  };
  gpd(graph.get(), handler);
  AddStatis(found_fc_count);

  return graph;
}
......
@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
#include "paddle/fluid/framework/ir/fuse_pass_base.h"
#include "paddle/fluid/framework/ir/graph.h" #include "paddle/fluid/framework/ir/graph.h"
#include "paddle/fluid/framework/ir/graph_pattern_detector.h" #include "paddle/fluid/framework/ir/graph_pattern_detector.h"
#include "paddle/fluid/framework/ir/pass.h" #include "paddle/fluid/framework/ir/pass.h"
...@@ -23,7 +24,7 @@ namespace ir { ...@@ -23,7 +24,7 @@ namespace ir {
/* /*
* Fuse the MUL and ELEMENTWISE_ADD to a FCOp. * Fuse the MUL and ELEMENTWISE_ADD to a FCOp.
*/ */
class FCFusePass : public Pass { class FCFusePass : public FusePassBase {
public: public:
virtual ~FCFusePass() {} virtual ~FCFusePass() {}
......
@@ -25,8 +25,13 @@ void SetOp(ProgramDesc* prog, const std::string& type,
           const std::vector<std::string>& outputs) {
  auto* op = prog->MutableBlock(0)->AppendOp();
  op->SetType(type);
  if (type == "mul") {
    op->SetInput("X", {inputs[0]});
    op->SetInput("Y", {inputs[1]});
  } else if (type == "elementwise_add") {
    op->SetInput("X", inputs);
  }
  op->SetOutput("Out", outputs);
}

// a->OP0->b
......
@@ -36,7 +36,7 @@ std::unique_ptr<ir::Graph> FCLstmFusePass::ApplyImpl(
  auto handler = [&](const GraphPatternDetector::subgraph_t& subgraph,
                     Graph* g) {
    auto* id = subgraph.at(gpd.pattern().RetrieveNode("any_node"));
    marked_nodes.insert(id);
  };
  gpd(graph.get(), handler);
@@ -64,9 +64,9 @@ std::unique_ptr<ir::Graph> FCLstmFusePass::ApplyImpl(
#undef GET_NODE
#undef SET_IN

  VLOG(4) << "hidden_n: " << hidden_n->Name();
  VLOG(4) << "cell: " << cell_n->Name();
  VLOG(4) << "xx: " << xx_n->Name();

  op_desc.SetInput("H0", {});
  op_desc.SetInput("C0", {});
......
@@ -22,21 +22,37 @@ namespace paddle {
namespace framework {
namespace ir {

static const char kParamScopeAttr[] = "__param_scope__";
static const char kFuseStatisAttr[] = "__fuse_statis__";
class FusePassBase : public Pass {
 public:
  void Init(const std::string& repr, Graph* graph) const {
    repr_ = repr;
    graph_ = graph;
  }

  Scope* param_scope() const {
    PADDLE_ENFORCE(graph_->Has(kParamScopeAttr));
    return graph_->Get<framework::Scope*>(kParamScopeAttr);
  }
  void AddStatis(int count_of_fused) const {
    PADDLE_ENFORCE(graph_);
    PADDLE_ENFORCE(!repr_.empty());
    if (!graph_->Has(kFuseStatisAttr)) {
      graph_->Set(kFuseStatisAttr, new std::unordered_map<std::string, int>);
    }
    auto& info =
        graph_->Get<std::unordered_map<std::string, int>>(kFuseStatisAttr);
    info[repr_] = count_of_fused;
  }
  virtual ~FusePassBase() {}

 protected:
  mutable Graph* graph_;
  mutable std::string repr_;
};

}  // namespace ir
......
@@ -27,6 +27,19 @@ namespace ir {
size_t PDPattern::id_ = 0UL;
PDNode* PDPattern::NewNode(const std::string& name) {
if (!name.empty()) {
PADDLE_ENFORCE_EQ(node_map_.count(name), 0,
"PDNode's name should be unique, get duplicate [%s]",
name);
}
nodes_.emplace_back(new PDNode(this, name));
auto* cur = nodes_.back().get();
node_map_[name] = cur;
return cur;
}
PDNode* PDPattern::NewNode(PDNode::teller_t&& teller, const std::string& name) { PDNode* PDPattern::NewNode(PDNode::teller_t&& teller, const std::string& name) {
if (!name.empty()) { if (!name.empty()) {
PADDLE_ENFORCE_EQ(node_map_.count(name), 0, PADDLE_ENFORCE_EQ(node_map_.count(name), 0,
...@@ -40,7 +53,7 @@ PDNode* PDPattern::NewNode(PDNode::teller_t&& teller, const std::string& name) { ...@@ -40,7 +53,7 @@ PDNode* PDPattern::NewNode(PDNode::teller_t&& teller, const std::string& name) {
return cur; return cur;
} }
PDNode* PDPattern::RetriveNode(const std::string& id) const { PDNode* PDPattern::RetrieveNode(const std::string& id) const {
auto it = node_map_.find(id); auto it = node_map_.find(id);
if (it == node_map_.end()) { if (it == node_map_.end()) {
return nullptr; return nullptr;
...@@ -62,7 +75,9 @@ void GraphPatternDetector::operator()(Graph* graph, ...@@ -62,7 +75,9 @@ void GraphPatternDetector::operator()(Graph* graph,
auto subgraphs = DetectPatterns(); auto subgraphs = DetectPatterns();
UniquePatterns(&subgraphs); UniquePatterns(&subgraphs);
RemoveOverlappedMatch(&subgraphs); RemoveOverlappedMatch(&subgraphs);
ValidateByNodeRole(&subgraphs);
if (subgraphs.empty()) return;
LOG(INFO) << "detect " << subgraphs.size() << " subgraph matches the pattern"; LOG(INFO) << "detect " << subgraphs.size() << " subgraph matches the pattern";
int id = 0; int id = 0;
for (auto& g : subgraphs) { for (auto& g : subgraphs) {
...@@ -83,10 +98,54 @@ bool GraphPatternDetector::MarkPDNodesInGraph(const ir::Graph& graph) { ...@@ -83,10 +98,54 @@ bool GraphPatternDetector::MarkPDNodesInGraph(const ir::Graph& graph) {
} }
} }
} }
// Check to early stop if some PDNode can't find matched Node.
for (auto& pdnode : pattern_.nodes()) {
if (!pdnodes2nodes_.count(pdnode.get())) {
VLOG(4) << pdnode->name() << " can't find matched Node, early stop";
return false;
}
}
VLOG(3) << pdnodes2nodes_.size() << " nodes marked"; VLOG(3) << pdnodes2nodes_.size() << " nodes marked";
return !pdnodes2nodes_.empty(); return !pdnodes2nodes_.empty();
} }
+// The intermediate Nodes may only link to nodes inside the pattern, or the
+// subgraph will be dropped.
+void GraphPatternDetector::ValidateByNodeRole(
+    std::vector<GraphPatternDetector::subgraph_t>* subgraphs) {
+  std::vector<GraphPatternDetector::subgraph_t> result;
+
+  subgraphs->erase(
+      std::remove_if(
+          subgraphs->begin(), subgraphs->end(),
+          [](const GraphPatternDetector::subgraph_t& subgraph) -> bool {
+            // Collect the inputs and outputs.
+            std::unordered_set<Node*> ios;
+            for (auto& item : subgraph) {
+              if (!item.first->IsIntermediate()) {
+                ios.insert(item.second);
+              }
+            }
+            for (auto& item : subgraph) {
+              if (item.first->IsIntermediate()) {
+                for (auto* x : item.second->inputs) {
+                  if (!ios.count(x)) {
+                    return true;
+                  }
+                }
+                for (auto* x : item.second->outputs) {
+                  if (!ios.count(x)) {
+                    return true;
+                  }
+                }
+              }
+            }
+            return false;
+          }),
+      subgraphs->end());
+}
 struct HitGroup {
   std::unordered_map<PDNode*, Node*> roles;
@@ -140,6 +199,7 @@ GraphPatternDetector::DetectPatterns() {
   // in edges of PDNodes.
   for (const auto& edge : pattern_.edges()) {
     VLOG(4) << "check " << edge.first->name() << " -> " << edge.second->name();
+    // TODO(Superjomn) Fix bug here, the groups might be duplicate here.
     // Each role has two PDNodes, which indicates two roles.
     // Detect two Nodes that can match these two roles and they are connected.
     auto& pre_groups = bi_records[step % 2];
@@ -149,6 +209,7 @@ GraphPatternDetector::DetectPatterns() {
     // source -> target
     for (Node* source : pdnodes2nodes_[edge.first]) {
       for (Node* target : pdnodes2nodes_[edge.second]) {
+        VLOG(8) << "check " << source->id() << " -- " << target->id();
         // TODO(Superjomn) add some prune strategies.
         for (const auto& group : pre_groups) {
           HitGroup new_group = group;
@@ -165,6 +226,12 @@ GraphPatternDetector::DetectPatterns() {
       }
     }
     VLOG(3) << "step " << step << " get records: " << cur_groups.size();
+    for (auto& group : cur_groups) {
+      for (auto& item : group.roles) {
+        VLOG(4) << "node " << item.second->id() << " as " << item.first->name();
+      }
+      VLOG(4) << "=========================================================";
+    }
   }

   for (auto& group : bi_records[step % 2]) {
@@ -260,6 +327,118 @@ PDNode& PDNode::LinksFrom(const std::vector<PDNode*>& others) {
   return *this;
 }
+PDNode* PDNode::assert_is_op() {
+  asserts_.emplace_back([this](Node* x) { return x && x->IsOp(); });
+  return this;
+}
+PDNode* PDNode::assert_is_op(const std::string& op_type) {
+  asserts_.emplace_back([this, op_type](Node* x) {
+    return x && x->IsOp() && x->Op()->Type() == op_type;
+  });
+  return this;
+}
+PDNode* PDNode::assert_is_var() {
+  asserts_.emplace_back([this](Node* x) { return x && x->IsVar(); });
+  return this;
+}
+PDNode* PDNode::assert_var_not_persistable() {
+  assert_is_var();
+  asserts_.emplace_back([this](Node* x) { return !x->Var()->Persistable(); });
+  return this;
+}
+PDNode* PDNode::assert_is_persistable_var() {
+  assert_is_var();
+  asserts_.emplace_back([=](Node* x) { return x->Var()->Persistable(); });
+  return this;
+}
+PDNode* PDNode::assert_is_op_nth_input(const std::string& op_type,
+                                       const std::string& argument, int nth) {
+  assert_is_var();
+  assert_is_op_input(op_type);
+  asserts_.emplace_back([=](Node* x) {
+    for (auto* op : x->outputs) {
+      if (IsNthInput(x, op, argument, nth)) return true;
+    }
+    return false;
+  });
+  return this;
+}
+PDNode* PDNode::assert_is_op_nth_output(const std::string& op_type,
+                                        const std::string& argument, int nth) {
+  assert_is_var();
+  asserts_.emplace_back([=](Node* x) {
+    for (auto* op : x->inputs) {
+      if (IsNthOutput(x, op, argument, nth)) return true;
+    }
+    return false;
+  });
+  return this;
+}
+PDNode* PDNode::assert_is_only_input_of_op(const std::string& op_type) {
+  assert_is_var();
+  asserts_.emplace_back([=](Node* x) {
+    for (auto* op : x->outputs) {
+      if (op && op->IsOp() && op->Op() && op->Op()->Type() == op_type &&
+          op->inputs.size() == 1) {
+        return true;
+      }
+    }
+    return false;
+  });
+  return this;
+}
+PDNode* PDNode::assert_is_only_output_of_op(const std::string& op_type) {
+  assert_is_var();
+  asserts_.emplace_back([=](Node* x) {
+    for (auto* op : x->inputs) {
+      if (op && op->IsOp() && op->Op() && op->Op()->Type() == op_type &&
+          op->outputs.size() == 1) {
+        return true;
+      }
+    }
+    return false;
+  });
+  return this;
+}
+PDNode* PDNode::assert_is_op_output(const std::string& op_type) {
+  assert_is_var();
+  asserts_.emplace_back([=](Node* x) {
+    for (auto* op : x->inputs) {
+      if (op && op->IsOp() && op->Op() && op->Op()->Type() == op_type) {
+        return true;
+      }
+    }
+    return false;
+  });
+  return this;
+}
+PDNode* PDNode::assert_is_op_input(const std::string& op_type) {
+  assert_is_var();
+  asserts_.emplace_back([=](Node* x) {
+    for (auto* op : x->outputs) {
+      if (op && op->IsOp() && op->Op() && op->Op()->Type() == op_type) {
+        return true;
+      }
+    }
+    return false;
+  });
+  return this;
+}
+PDNode* PDNode::assert_op_has_n_inputs(const std::string& op_type, size_t n) {
+  assert_is_op(op_type);
+  asserts_.emplace_back([=](Node* x) { return x->inputs.size() == n; });
+  return this;
+}
+PDNode* PDNode::assert_op_has_n_outputs(const std::string& op_type, size_t n) {
+  assert_is_op(op_type);
+  asserts_.emplace_back([=](Node* x) { return x->outputs.size() == n; });
+  return this;
+}
+PDNode* PDNode::assert_more(PDNode::teller_t&& teller) {
+  asserts_.emplace_back(std::move(teller));
+  return this;
+}
 }  // namespace ir
 }  // namespace framework
 }  // namespace paddle
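Taken together, these helpers replace hand-written tellers with chainable assertions. A minimal sketch of the new style; the op types and the node name are assumptions for illustration:

```cpp
// Hypothetical fragment: the output var of a "mul" op that is also the sole
// input of an "elementwise_add", marked for removal after fusion.
GraphPatternDetector detector;
auto* pattern = detector.mutable_pattern();
auto* mul_out = pattern->NewNode("mul_out")
                    ->assert_is_op_output("mul")
                    ->assert_is_only_input_of_op("elementwise_add")
                    ->AsIntermediate();
```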
@@ -39,14 +39,24 @@ struct PDNode {
   // tell whether an ir::Node* is a candidate for a PDNode.
   using teller_t = std::function<bool(Node*)>;
   enum class Type { kOp, kVar };
+  enum class Role {
+    kUnknown,      // No role,
+    kInput,        // an input and will be retained,
+    kOutput,       // an output and will be retained,
+    kIntermediate  // will be removed after handler.
+  };

   // this links to others
   PDNode& LinksTo(const std::vector<PDNode*>& others);
   PDNode& LinksFrom(const std::vector<PDNode*>& others);

   bool Tell(Node* node) const {
-    PADDLE_ENFORCE(teller_ != nullptr, "teller should be set for a PDNode");
-    return teller_(node);
+    if (teller_) return teller_(node);
+
+    for (auto& asrt : asserts_) {
+      if (!asrt(node)) return false;
+    }
+    return true;
   }

   bool IsOp() const { return type_ == Type::kOp; }
@@ -54,10 +64,52 @@ struct PDNode {
   const std::string& name() const { return name_; }

-  PDNode(const PDNode&) = delete;
   PDNode& operator=(const PDNode&) = delete;
+  PDNode(const PDNode&) = delete;

+  // Mark this node as an Input of a subgraph; it will be retained.
+  PDNode* AsInput() {
+    role_ = Role::kInput;
+    return this;
+  }
+  // Mark this node as an Output of a subgraph; it will be retained.
+  PDNode* AsOutput() {
+    role_ = Role::kOutput;
+    return this;
+  }
+  // Mark this node as to-be-removed, so all its links must stay inside a
+  // matched sub-graph.
+  PDNode* AsIntermediate() {
+    role_ = Role::kIntermediate;
+    return this;
+  }
+
+  bool IsIntermediate() const { return role_ == Role::kIntermediate; }
+  bool IsInput() const { return role_ == Role::kInput; }
+  bool IsOutput() const { return role_ == Role::kOutput; }
+
+  // Assertions, helper functions to simplify the pattern definition.
+  PDNode* assert_is_op();
+  PDNode* assert_is_op(const std::string& op_type);
+  PDNode* assert_is_var();
+  PDNode* assert_var_not_persistable();
+  PDNode* assert_is_persistable_var();
+  PDNode* assert_is_op_output(const std::string& op_type);
+  PDNode* assert_is_op_input(const std::string& op_type);
+  PDNode* assert_is_op_nth_input(const std::string& op_type,
+                                 const std::string& argument, int nth);
+  PDNode* assert_is_op_nth_output(const std::string& op_type,
+                                  const std::string& argument, int nth);
+  PDNode* assert_is_only_input_of_op(const std::string& op_type);
+  PDNode* assert_is_only_output_of_op(const std::string& op_type);
+  PDNode* assert_op_has_n_inputs(const std::string& op_type, size_t n);
+  PDNode* assert_op_has_n_outputs(const std::string& op_type, size_t n);
+  PDNode* assert_more(teller_t&& teller);

  private:
+  PDNode(PDPattern* pattern, const std::string& name = "",
+         Type type = Type::kVar)
+      : pattern_(pattern), name_(name), type_(type) {}
   PDNode(teller_t&& teller, PDPattern* pattern, const std::string& name = "",
          Type type = Type::kVar)
       : teller_(std::move(teller)),
@@ -71,10 +123,13 @@ struct PDNode {
   friend class PDPattern;

+  // Will be removed later.
   teller_t teller_;
+  std::vector<teller_t> asserts_;
   PDPattern* pattern_;
   std::string name_;
   Type type_;
+  Role role_{Role::kUnknown};
 };
 /*
@@ -87,19 +142,18 @@ struct PDNode {
 * This pattern can be defined with the following pseudo code
 *
 *    // Create two operator PDNodes.
-*    MUL = PDPattern.NewNode()
-*    ELE = PDPattern.NewNode()
+*    MUL = PDPattern.NewNode().assert_is_op("mul");
+*    ELE = PDPattern.NewNode().assert_is_op("elementwise_add");
 *    // Create the variable PDNodes.
-*    MUL_out = PDPattern.NewNode()
-*    // Add teller to define some rules that help to filter the target Nodes.
-*    MUL.teller = lambda(node): node->IsOp() && node->Op()->Type == "mul";
-*    ELE.teller = lambda(node): \
-*                 node->IsOp() && node->Op()->Type == "elementwise_add";
-*    MUL_out.teller = lambda(node): node->IsVar() && (MUL in node->inputs)
-*                     && (ELE in node->outputs)
+*    MUL_out = PDPattern.NewNode().assert_is_op_output("mul") \
+*                                 .assert_is_op_input("elementwise_add") \
+*                                 .AsIntermediate();
+*    // Add relations.
+*    MUL->LinksTo({MUL_out});
+*    MUL_out->LinksTo({ELE});
 *
-* One can add more specific tellers for PDNodes or edges; both the Operator
-* and Variable Nodes can be ruled in PDNode.teller.
+* One can add more specific asserts for PDNodes or edges; both the Operator
+* and Variable Nodes can be ruled in PDNode.assert_more(...).
 *
 * PDPattern can record general patterns, such as the pattern that represents
 *   - Op in CPU -> Op in GPU -> Op in CPU, to find out where IO is abnormal.
@@ -112,7 +166,8 @@ class PDPattern {
   void AddEdge(PDNode* a, PDNode* b);

   PDNode* NewNode(PDNode::teller_t&& teller, const std::string& name = NewID());
-  PDNode* RetriveNode(const std::string& id) const;
+  PDNode* NewNode(const std::string& name = NewID());
+  PDNode* RetrieveNode(const std::string& id) const;

   const std::vector<std::unique_ptr<PDNode>>& nodes() const { return nodes_; }
   const std::vector<edge_t>& edges() const { return edges_; }
@@ -185,6 +240,9 @@ class GraphPatternDetector {
   // Remove overlapped match subgraphs, when overlapped, keep the previous one.
   void RemoveOverlappedMatch(std::vector<subgraph_t>* subgraphs);

+  // Validate that the intermediate nodes are linked only by nodes inside the
+  // matched subgraph.
+  void ValidateByNodeRole(std::vector<subgraph_t>* subgraphs);

 #ifdef PADDLE_WITH_TESTING
   FRIEND_TEST(GraphPatternDetecter, MarkPDNodesInGraph);
   FRIEND_TEST(GraphPatternDetecter, DetectPatterns);
@@ -228,6 +286,14 @@ static bool IsNthInput(Node* var, Node* op, const std::string& argument,
   return var->Name() == op->Op()->Input(argument)[nth];
 }

+static bool IsNthOutput(Node* var, Node* op, const std::string& argument,
+                        size_t nth) {
+  PADDLE_ENFORCE(var->IsVar());
+  PADDLE_ENFORCE(op->IsOp());
+  if (op->Op()->Output(argument).size() <= nth) return false;
+  return var->Name() == op->Op()->Output(argument)[nth];
+}

 static void GraphSafeRemoveNodes(Graph* graph,
                                  const std::unordered_set<const Node*>& nodes) {
   for (auto* node : nodes) {
...
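For orientation, a hedged sketch of how `RetrieveNode`, the subgraph map, and `GraphSafeRemoveNodes` combine inside a detector handler; the pattern node names and the fused op are hypothetical:

```cpp
// Hypothetical fuse handler: fetch matched nodes by pattern name, build the
// fused op (elided), then erase the dead nodes in one safe call.
detector(graph.get(), [&](const GraphPatternDetector::subgraph_t& subgraph,
                          Graph* g) {
  Node* mul_op  = subgraph.at(pattern->RetrieveNode("mul"));
  Node* mul_out = subgraph.at(pattern->RetrieveNode("mul_out"));
  Node* add_op  = subgraph.at(pattern->RetrieveNode("elementwise_add"));
  // ... create the fused op node and rewire its inputs/outputs here ...
  GraphSafeRemoveNodes(g, {mul_op, mul_out, add_op});
});
```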
@@ -167,6 +167,39 @@ TEST(GraphPatternDetecter, MultiSubgraph) {
   ASSERT_LE(count, 2);
 }

+TEST(GraphPatternDetector, IntermediateCheck) {
+  ProgramDesc program;
+  Graph graph(program);
+  BuildGraph(&graph);
+  // o2->v2->o3
+  // o2->v2->o4
+  // Check o2+o3 fusion; it should fail because v2 also links to o4.
+  GraphPatternDetector detector;
+  auto* op2 = detector.mutable_pattern()->NewNode(
+      [](Node* x) { return x && x->IsOp() && x->Name() == "op2"; }, "op2");
+  auto* op3 = detector.mutable_pattern()->NewNode(
+      [](Node* x) { return x && x->IsOp() && x->Name() == "op3"; }, "op3");
+  auto* v2 =
+      detector.mutable_pattern()
+          ->NewNode(
+              [](Node* x) { return x && x->IsVar() && x->Name() == "var2"; },
+              "var2")
+          ->AsIntermediate();
+  v2->LinksFrom({op2}).LinksTo({op3});
+
+  int count = 0;
+  detector(&graph, [&](const GraphPatternDetector::subgraph_t& g,
+                       Graph* graph) { ++count; });
+  EXPECT_EQ(count, 0);
+
+  count = 0;
+  v2->AsInput();
+  detector(&graph, [&](const GraphPatternDetector::subgraph_t& g,
+                       Graph* graph) { ++count; });
+  ASSERT_EQ(count, 1);
+}

 }  // namespace ir
 }  // namespace framework
 }  // namespace paddle
@@ -16,13 +16,27 @@ limitations under the License. */
 #include <unordered_set>
 #include "paddle/fluid/framework/ir/graph_viz_pass.h"
+#include "paddle/fluid/framework/op_proto_maker.h"
 #include "paddle/fluid/inference/analysis/dot.h"
+#include "paddle/fluid/string/printf.h"

 namespace paddle {
 namespace framework {
 namespace ir {
-static const char kGraphVizPath[] = "graph_viz_path";
 using inference::analysis::Dot;
+namespace {
+const char kGraphVizPath[] = "graph_viz_path";
+
+std::string FormatName(const Node* node) {
+  if (!node->IsOp() || !node->Op() ||
+      !node->Op()->HasAttr(OpProtoAndCheckerMaker::OpNamescopeAttrName())) {
+    return node->Name();
+  }
+  const std::string full_scope = boost::get<std::string>(
+      node->Op()->GetAttr(OpProtoAndCheckerMaker::OpNamescopeAttrName()));
+  return string::Sprintf("%s%s", full_scope.c_str(), node->Name().c_str());
+}
+}  // namespace

 std::unique_ptr<ir::Graph> GraphVizPass::ApplyImpl(
     std::unique_ptr<ir::Graph> graph) const {
@@ -54,7 +68,7 @@ std::unique_ptr<ir::Graph> GraphVizPass::ApplyImpl(
   auto marked_nodes = ConsumeMarkedNodes(graph.get());
   // Create nodes
   for (const Node* n : graph->Nodes()) {
-    std::string node_id = n->Name() + "(" + std::to_string(n->id()) + ")";
+    std::string node_id = FormatName(n) + "(" + std::to_string(n->id()) + ")";
     if (n->IsOp()) {
       decltype(op_attrs) attr =
           marked_nodes.count(n) ? marked_op_attrs : op_attrs;
...
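To make the effect on the generated DOT file concrete, a small sketch with assumed values; the name, id, and namescope below are illustrations, not taken from this commit:

```cpp
// Assuming n->Name() == "fc", n->id() == 12, and the op's "op_namescope"
// attribute is "/lstm_0/":
std::string node_id = FormatName(n) + "(" + std::to_string(n->id()) + ")";
// node_id == "/lstm_0/fc(12)"   (previously "fc(12)"), which disambiguates
// ops that repeat across name scopes.
```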
@@ -55,11 +55,11 @@ class Node {
   std::string Name() const { return name_; }

   VarDesc* Var() {
-    PADDLE_ENFORCE(type_ == Type::kVariable);
+    PADDLE_ENFORCE(IsVar());
     return var_desc_.get();
   }

-  OpDesc* Op() {
+  OpDesc* Op() const {
     PADDLE_ENFORCE(IsOp());
     return op_desc_.get();
   }
...
@@ -180,16 +180,16 @@ PDNode* BuildFCPattern(PDPattern* pattern, PDNode* fc_x) {
 std::unique_ptr<ir::Graph> SeqConcatFcFusePass::ApplyImpl(
     std::unique_ptr<ir::Graph> graph) const {
-  FusePassBase::Init(graph.get());
+  FusePassBase::Init("seq_concat_fc_fuse", graph.get());
   GraphPatternDetector detector;
   auto* pattern = detector.mutable_pattern();
   auto* concat_out = BuildSeqExpandConcatPattern(pattern);
   BuildFCPattern(pattern, concat_out);

 #define GET_NODE(id, pattern)                               \
-  PADDLE_ENFORCE(subgraph.count(pattern.RetriveNode(#id)),  \
+  PADDLE_ENFORCE(subgraph.count(pattern.RetrieveNode(#id)), \
                  "pattern has no Node called %s", #id);     \
-  auto* id = subgraph.at(pattern.RetriveNode(#id));         \
+  auto* id = subgraph.at(pattern.RetrieveNode(#id));        \
   PADDLE_ENFORCE_NOT_NULL(id, "subgraph has no node %s", #id);

   detector(graph.get(), [&](const GraphPatternDetector::subgraph_t& subgraph,
...
@@ -129,6 +129,9 @@ void OpProtoAndCheckerMaker::operator()(proto::OpProto* proto,
                 "Optimized for variable")
       .SetDefault({});

+  AddAttr<std::string>(OpNamescopeAttrName(), "Operator name with namescope.")
+      .SetDefault("");
+
   Validate();
 }
...
@@ -39,6 +39,7 @@ class OpProtoAndCheckerMaker {
  public:
   static const char *OpRoleAttrName() { return "op_role"; }
   static const char *OpRoleVarAttrName() { return "op_role_var"; }
+  static const char *OpNamescopeAttrName() { return "op_namescope"; }

   void operator()(proto::OpProto *proto, OpAttrChecker *attr_checker);
...
@@ -93,7 +93,6 @@ class DfgPassManagerImpl final : public DfgPassManager {
   void AddGraphvizDebugerPass(Pass* pass) {
     auto* debuger_pass = pass->CreateGraphvizDebugerPass();
     if (debuger_pass) {
-      LOG(INFO) << " - register debug pass [" << debuger_pass->repr() << "]";
       Register(debuger_pass->repr(), debuger_pass);
     }
   }
...
@@ -16,10 +16,13 @@
 #include <google/protobuf/text_format.h>
 #include <gtest/gtest.h>
+#include "paddle/fluid/framework/ir/fuse_pass_base.h"
 #include "paddle/fluid/framework/ir/pass.h"
 #include "paddle/fluid/inference/analysis/ut_helper.h"
+#include "paddle/fluid/inference/api/analysis_predictor.h"
 #include "paddle/fluid/inference/api/helper.h"
 #include "paddle/fluid/inference/api/paddle_inference_api.h"
+#include "paddle/fluid/inference/utils/singleton.h"
 #include "paddle/fluid/platform/profiler.h"

 DEFINE_string(infer_ditu_rnn_model, "", "model path for ditu RNN");
@@ -31,6 +34,8 @@ namespace paddle {
 namespace inference {
 namespace analysis {

+using namespace framework;
+
 TEST(Analyzer, analysis_without_tensorrt) {
   FLAGS_IA_enable_tensorrt_subgraph_engine = false;
   Argument argument;
@@ -311,6 +316,20 @@ void TestDituRNNPrediction(const std::string &model_path,
       EXPECT_NEAR(data[i], base_data[i], 1e-3);
     }
   }
+
+  if (use_analysis && activate_ir) {
+    AnalysisPredictor *analysis_predictor =
+        dynamic_cast<AnalysisPredictor *>(predictor.get());
+    auto &fuse_statis = analysis_predictor->analysis_argument()
+                            .Get<std::unordered_map<std::string, int>>(
+                                framework::ir::kFuseStatisAttr);
+    for (auto &item : fuse_statis) {
+      LOG(INFO) << "fused " << item.first << " " << item.second;
+    }
+    ASSERT_TRUE(fuse_statis.count("fc"));
+    EXPECT_EQ(fuse_statis.at("fc"), 1);
+  }
 }

 // Directly infer with the original model.
...
@@ -64,7 +64,8 @@ struct Argument {
   template <typename T>
   void Set(const std::string& key, T* data) {
     PADDLE_ENFORCE_NOT_NULL(data);
-    PADDLE_ENFORCE(!attrs_.count(key), "duplicate attr called %s", key);
+    PADDLE_ENFORCE(!attrs_.count(key), "Duplicate set Argument's attr [%s]",
+                   key);
     attrs_[key] = data;
     attr_deleters_[key] = [data, key, this]() {
       VLOG(3) << "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";
...
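A minimal usage sketch of the contract behind the sharper message; the attribute key `"repeated_key"` is made up for illustration:

```cpp
Argument argument;
argument.Set<int>("repeated_key", new int(42));  // first Set for this key: OK
// A second Set with the same key now fails with the clearer message:
//   "Duplicate set Argument's attr [repeated_key]"
// argument.Set<int>("repeated_key", new int(7));  // would throw
```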
(This diff is collapsed.)
@@ -24,7 +24,7 @@ limitations under the License. */
 #endif

 #ifdef PADDLE_WITH_MKLDNN
-#include <mkldnn.hpp>
+#include "mkldnn.hpp"
 #endif

 #include <map>
...
(4 more file diffs are collapsed.)