PaddlePaddle / book, commit db98be4b
Authored on July 17, 2018 by Chen Weihang

fix: fix bug which inferencer can't be created and stacked_lstm_net code error

Parent: d4414cbb
Showing 4 changed files with 26 additions and 24 deletions (+26, -24):

    06.understand_sentiment/README.cn.md    +10  -12
    06.understand_sentiment/README.md        +3   -0
    06.understand_sentiment/index.cn.html   +10  -12
    06.understand_sentiment/index.html       +3   -0
06.understand_sentiment/README.cn.md

@@ -107,6 +107,7 @@ Paddle implements automatic downloading and reading of the IMDB dataset in `dataset/imdb.py`
 In this example we implement two text classification algorithms: one based on the text convolutional neural network introduced in the [Recommender System](https://github.com/PaddlePaddle/book/tree/develop/05.recommender_system) chapter, and one based on a [Stacked Bidirectional LSTM](#栈式双向LSTM(Stacked Bidirectional LSTM)). We first import the required libraries and define the global variables:
 
 ```python
+from __future__ import print_function
 import paddle
 import paddle.fluid as fluid
 from functools import partial
@@ -115,6 +116,7 @@ import numpy as np
 CLASS_DIM = 2
 EMB_DIM = 128
 HID_DIM = 512
+STACKED_NUM = 3
 BATCH_SIZE = 128
 USE_GPU = False
 ```
@@ -168,17 +170,12 @@ def stacked_lstm_net(data, input_dim, class_dim, emb_dim, hid_dim, stacked_num):
             input=fc, size=hid_dim, is_reverse=(i % 2) == 0)
         inputs = [fc, lstm]
 
-    fc_last = paddle.layer.pooling(input=inputs[0], pooling_type=paddle.pooling.Max())
-    lstm_last = paddle.layer.pooling(input=inputs[1], pooling_type=paddle.pooling.Max())
-    output = paddle.layer.fc(input=[fc_last, lstm_last],
-                             size=class_dim,
-                             act=paddle.activation.Softmax(),
-                             bias_attr=bias_attr,
-                             param_attr=para_attr)
-    lbl = paddle.layer.data("label", paddle.data_type.integer_value(2))
-    cost = paddle.layer.classification_cost(input=output, label=lbl)
-    return cost, output
+    fc_last = fluid.layers.sequence_pool(input=inputs[0], pool_type='max')
+    lstm_last = fluid.layers.sequence_pool(input=inputs[1], pool_type='max')
+    prediction = fluid.layers.fc(
+        input=[fc_last, lstm_last], size=class_dim, act='softmax')
+    return prediction
 ```
 The stacked bidirectional LSTM above abstracts high-level features and maps them onto a vector the same size as the number of classes. The `paddle.activation.Softmax` function is used to compute the probability that a sample belongs to a given class.
@@ -193,6 +190,7 @@ def inference_program(word_dict):
     dict_dim = len(word_dict)
     net = convolution_net(data, dict_dim, CLASS_DIM, EMB_DIM, HID_DIM)
+    # net = stacked_lstm_net(data, dict_dim, CLASS_DIM, EMB_DIM, HID_DIM, STACKED_NUM)
     return net
 ```
@@ -301,7 +299,7 @@ trainer.train(
 ```python
 inferencer = fluid.Inferencer(
-    inference_program, param_path=params_dirname, place=place)
+    infer_func=partial(inference_program, word_dict), param_path=params_dirname, place=place)
 ```
 ### Generating test input data
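The stacked_lstm_net hunk above replaces leftover v2-style `paddle.layer` calls with their `fluid.layers` equivalents, which is the "stacked_lstm_net code error" named in the commit message. For context, a minimal sketch of how the corrected tail sits inside the full function; the embedding and fc/dynamic_lstm stack before `fc_last` follow the chapter text and are an assumption, only the last four statements are confirmed by this diff:

```python
import paddle.fluid as fluid

def stacked_lstm_net(data, input_dim, class_dim, emb_dim, hid_dim, stacked_num):
    # Assumed from the chapter: embed the word ids, then build a stack of
    # fc + LSTM layers, alternating the LSTM direction on each level.
    emb = fluid.layers.embedding(
        input=data, size=[input_dim, emb_dim], is_sparse=True)
    fc1 = fluid.layers.fc(input=emb, size=hid_dim)
    lstm1, cell1 = fluid.layers.dynamic_lstm(input=fc1, size=hid_dim)

    inputs = [fc1, lstm1]
    for i in range(2, stacked_num + 1):
        fc = fluid.layers.fc(input=inputs, size=hid_dim)
        lstm, cell = fluid.layers.dynamic_lstm(
            input=fc, size=hid_dim, is_reverse=(i % 2) == 0)
        inputs = [fc, lstm]

    # The tail introduced by this commit: max-pool each branch over the
    # sequence, then one fc with softmax producing the class probabilities.
    fc_last = fluid.layers.sequence_pool(input=inputs[0], pool_type='max')
    lstm_last = fluid.layers.sequence_pool(input=inputs[1], pool_type='max')
    prediction = fluid.layers.fc(
        input=[fc_last, lstm_last], size=class_dim, act='softmax')
    return prediction
```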
06.understand_sentiment/README.md

@@ -103,6 +103,7 @@ After issuing a command `python train.py`, training will start immediately. The
 Our program starts with importing necessary packages and initializing some global variables:
 
 ```python
+from __future__ import print_function
 import paddle
 import paddle.fluid as fluid
 from functools import partial
@@ -111,6 +112,7 @@ import numpy as np
 CLASS_DIM = 2
 EMB_DIM = 128
 HID_DIM = 512
+STACKED_NUM = 3
 BATCH_SIZE = 128
 USE_GPU = False
 ```
@@ -192,6 +194,7 @@ def inference_program(word_dict):
     dict_dim = len(word_dict)
     net = convolution_net(data, dict_dim, CLASS_DIM, EMB_DIM, HID_DIM)
+    # net = stacked_lstm_net(data, dict_dim, CLASS_DIM, EMB_DIM, HID_DIM, STACKED_NUM)
     return net
 ```
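The comment added to `inference_program` records how to switch the example from the convolutional model to the stacked bidirectional LSTM, which is why `STACKED_NUM` is now defined with the other globals. A hedged sketch of the whole function; the `fluid.layers.data` input definition is assumed from the chapter, only the two `net = ...` lines appear in this diff:

```python
def inference_program(word_dict):
    # Assumed input definition: the chapter feeds variable-length
    # sequences of word ids as a LoD tensor named "words".
    data = fluid.layers.data(
        name="words", shape=[1], dtype="int64", lod_level=1)

    dict_dim = len(word_dict)
    # Default model: the text convolutional network.
    net = convolution_net(data, dict_dim, CLASS_DIM, EMB_DIM, HID_DIM)
    # To try the stacked bidirectional LSTM instead, comment the line above
    # and uncomment this one (STACKED_NUM is the global added by this commit):
    # net = stacked_lstm_net(data, dict_dim, CLASS_DIM, EMB_DIM, HID_DIM, STACKED_NUM)
    return net
```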
06.understand_sentiment/index.cn.html

@@ -149,6 +149,7 @@ Paddle implements automatic downloading and reading of the IMDB dataset in `dataset/imdb.py`
 In this example we implement two text classification algorithms: one based on the text convolutional neural network introduced in the [Recommender System](https://github.com/PaddlePaddle/book/tree/develop/05.recommender_system) chapter, and one based on a [Stacked Bidirectional LSTM](#栈式双向LSTM(Stacked Bidirectional LSTM)). We first import the required libraries and define the global variables:
 
 ```python
+from __future__ import print_function
 import paddle
 import paddle.fluid as fluid
 from functools import partial
@@ -157,6 +158,7 @@ import numpy as np
 CLASS_DIM = 2
 EMB_DIM = 128
 HID_DIM = 512
+STACKED_NUM = 3
 BATCH_SIZE = 128
 USE_GPU = False
 ```
@@ -210,17 +212,12 @@ def stacked_lstm_net(data, input_dim, class_dim, emb_dim, hid_dim, stacked_num):
             input=fc, size=hid_dim, is_reverse=(i % 2) == 0)
         inputs = [fc, lstm]
 
-    fc_last = paddle.layer.pooling(input=inputs[0], pooling_type=paddle.pooling.Max())
-    lstm_last = paddle.layer.pooling(input=inputs[1], pooling_type=paddle.pooling.Max())
-    output = paddle.layer.fc(input=[fc_last, lstm_last],
-                             size=class_dim,
-                             act=paddle.activation.Softmax(),
-                             bias_attr=bias_attr,
-                             param_attr=para_attr)
-    lbl = paddle.layer.data("label", paddle.data_type.integer_value(2))
-    cost = paddle.layer.classification_cost(input=output, label=lbl)
-    return cost, output
+    fc_last = fluid.layers.sequence_pool(input=inputs[0], pool_type='max')
+    lstm_last = fluid.layers.sequence_pool(input=inputs[1], pool_type='max')
+    prediction = fluid.layers.fc(
+        input=[fc_last, lstm_last], size=class_dim, act='softmax')
+    return prediction
 ```
 The stacked bidirectional LSTM above abstracts high-level features and maps them onto a vector the same size as the number of classes. The `paddle.activation.Softmax` function is used to compute the probability that a sample belongs to a given class.
@@ -235,6 +232,7 @@ def inference_program(word_dict):
     dict_dim = len(word_dict)
     net = convolution_net(data, dict_dim, CLASS_DIM, EMB_DIM, HID_DIM)
+    # net = stacked_lstm_net(data, dict_dim, CLASS_DIM, EMB_DIM, HID_DIM, STACKED_NUM)
     return net
 ```
@@ -343,7 +341,7 @@ trainer.train(
 ```python
 inferencer = fluid.Inferencer(
-    inference_program, param_path=params_dirname, place=place)
+    infer_func=partial(inference_program, word_dict), param_path=params_dirname, place=place)
 ```
 ### Generating test input data
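The last hunk is the "inferencer can't be created" fix from the commit message: `fluid.Inferencer` expects `infer_func` to be a callable it can invoke without arguments to rebuild the inference network, so the program builder is now wrapped with `functools.partial` to pre-bind `word_dict`. A minimal sketch, assuming `word_dict`, `params_dirname`, and `USE_GPU` as defined earlier in the chapter:

```python
from functools import partial

import paddle.fluid as fluid

# Assumed from earlier in the chapter: pick the device the trained
# parameters in params_dirname were saved for.
place = fluid.CUDAPlace(0) if USE_GPU else fluid.CPUPlace()

# infer_func must be a callable that rebuilds the inference network;
# partial(...) pre-binds word_dict so fluid.Inferencer can invoke
# inference_program with no extra arguments.
inferencer = fluid.Inferencer(
    infer_func=partial(inference_program, word_dict),
    param_path=params_dirname,
    place=place)
```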
06.understand_sentiment/index.html

@@ -145,6 +145,7 @@ After issuing a command `python train.py`, training will start immediately. The
 Our program starts with importing necessary packages and initializing some global variables:
 
 ```python
+from __future__ import print_function
 import paddle
 import paddle.fluid as fluid
 from functools import partial
@@ -153,6 +154,7 @@ import numpy as np
 CLASS_DIM = 2
 EMB_DIM = 128
 HID_DIM = 512
+STACKED_NUM = 3
 BATCH_SIZE = 128
 USE_GPU = False
 ```
@@ -234,6 +236,7 @@ def inference_program(word_dict):
     dict_dim = len(word_dict)
     net = convolution_net(data, dict_dim, CLASS_DIM, EMB_DIM, HID_DIM)
+    # net = stacked_lstm_net(data, dict_dim, CLASS_DIM, EMB_DIM, HID_DIM, STACKED_NUM)
     return net
 ```
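After this commit the inferencer can be constructed, and the "生成测试用输入数据" (generate test input data) step that follows in the chapter feeds it word-id sequences packed into a LoD tensor. A hedged usage sketch; the sample reviews, the `'<unk>'` key, and the `'words'` feed name follow the chapter's conventions and are assumptions, not part of this diff:

```python
# Tokenized example reviews (hypothetical inputs for illustration).
reviews = [c.split() for c in
           ['read the book forget the movie',
            'this is a great movie',
            'this is very bad']]

# Map words to ids, falling back to the unknown-word id.
UNK = word_dict['<unk>']
lod = [[word_dict.get(w, UNK) for w in c] for c in reviews]

# Pack the variable-length sequences into a LoD tensor on the target device.
base_shape = [[len(c) for c in lod]]
tensor_words = fluid.create_lod_tensor(lod, base_shape, place)

# The fixed inferencer now returns the softmax prediction for each review.
results = inferencer.infer({'words': tensor_words})
```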
...
编辑
预览
Markdown
is supported
0%
请重试
或
添加新附件
.
添加附件
取消
You are about to add
0
people
to the discussion. Proceed with caution.
先完成此消息的编辑!
取消
想要评论请
注册
或
登录