PaddlePaddle / PaddleRec, commit 9a12e113 (unverified)
Authored Sep 09, 2020 by tangwei12; committed via GitHub on Sep 09, 2020

Merge branch 'master' into fix_collective_files_partition

Parents: c8888222, b619c193

22 changed files with 856 additions and 15 deletions (+856 -15)
doc/pre_train_model.md                                              +20   -2
doc/yaml.md                                                          +2   -0
models/contentunderstanding/readme.md                                +6   -6
models/contentunderstanding/textcnn/__init__.py                      +0   -0
models/contentunderstanding/textcnn/config.yaml                      +1   -1
models/contentunderstanding/textcnn/data/preprocess.py               +0   -0
models/contentunderstanding/textcnn/data/test/test.txt               +0   -0
models/contentunderstanding/textcnn/data/train/train.txt             +0   -0
models/contentunderstanding/textcnn/model.py                         +0   -0
models/contentunderstanding/textcnn/reader.py                        +0   -0
models/contentunderstanding/textcnn/readme.md                        +4   -5
models/contentunderstanding/textcnn_pretrain/__init__.py            +13   -0
models/contentunderstanding/textcnn_pretrain/basemodel.py          +118   -0
models/contentunderstanding/textcnn_pretrain/config.yaml            +70   -0
models/contentunderstanding/textcnn_pretrain/data/preprocess.py     +67   -0
models/contentunderstanding/textcnn_pretrain/data/test/test.txt     +20   -0
models/contentunderstanding/textcnn_pretrain/data/train/train.txt  +100   -0
models/contentunderstanding/textcnn_pretrain/finetune_startup.py   +154   -0
models/contentunderstanding/textcnn_pretrain/model.py               +92   -0
models/contentunderstanding/textcnn_pretrain/reader.py              +43   -0
models/contentunderstanding/textcnn_pretrain/readme.md             +145   -0
tools/build_script.sh                                                +1   -1
doc/pre_train_model.md (view @ 9a12e113)

@@ -7,9 +7,27 @@ Drawing on business practice and real data, PaddleRec has produced pretrained models for recommendation-domain algorithms

### Download

```bash
-wget xxx.tar.gz
+wget https://paddlerec.bj.bcebos.com/textcnn_pretrain%2Fpretrain_model.tar.gz
```

### Usage

-After extraction you get a Paddle model folder; load it with the `PaddleRec/models/contentunderstanding/classification_finetue` model.
+After extraction you get a Paddle model folder; load it with the `PaddleRec/models/contentunderstanding/textcnn` model.
+You can find the finetune_startup.py file under PaddleRec/models/contentunderstanding/textcnn_pretrain and configure two parameters in config.yaml: startup_class_path and init_pretraining_model_path.
+Set startup_class_path to the location of the finetune_startup.py file, and set init_pretraining_model_path to the parameter files you want to load.
+Taking textcnn_pretrain as an example, the configured runner looks like this:
+```
+runner:
+- name: train_runner
+  class: train
+  epochs: 6
+  device: cpu
+  save_checkpoint_interval: 1
+  save_checkpoint_path: "increment"
+  init_model_path: ""
+  print_interval: 10
+  startup_class_path: "{workspace}/finetune_startup.py"
+  init_pretraining_model_path: "{workspace}/pretrain_model/pretrain_model_params"
+  phases: phase_train
+```
+For detailed usage, see the textcnn guide [finetune with a pretrained model](https://github.com/PaddlePaddle/PaddleRec/tree/master/models/contentunderstanding/textcnn_pretrain).
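
For orientation, the loading step this configuration triggers (implemented by finetune_startup.py, which is added in this commit and shown below) boils down to a predicate-filtered fluid.io.load_vars call. A minimal sketch, with exe and program standing in for the executor and main program that PaddleRec supplies:

```python
import os

import paddle.fluid as fluid


def load_pretrained(exe, program, pretrain_dir):
    """Restore only the parameters that exist as files under pretrain_dir."""

    def existed_params(var):
        # Restore a variable only if it is a Parameter and a file with the
        # same name exists in the pretrained-model directory.
        if not isinstance(var, fluid.framework.Parameter):
            return False
        return os.path.exists(os.path.join(pretrain_dir, var.name))

    fluid.io.load_vars(
        exe, pretrain_dir, main_program=program, predicate=existed_params)
```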
doc/yaml.md (view @ 9a12e113)

@@ -37,6 +37,8 @@

| startup_class_path | string | path | no | location of a custom startup flow implementation |
| runner_class_path | string | path | no | location of a custom runner flow implementation |
| terminal_class_path | string | path | no | location of a custom terminal flow implementation |
+| init_pretraining_model_path | string | path | no | must be passed into the custom startup flow; path of the parameters to be loaded during finetuning |
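
Inside a custom startup flow, this value is read from the active runner's section with envs.get_global_env. A small sketch following the pattern that finetune_startup.py in this commit uses (context is the trainer context PaddleRec passes in):

```python
from paddlerec.core.utils import envs


def pretraining_path(context):
    # Returns the configured path for the runner currently executing.
    # An empty string means no pretrained parameters should be loaded
    # (cold start, or hot start if init_model_path is set).
    return envs.get_global_env(
        "runner." + context["runner_name"] + ".init_pretraining_model_path",
        "")
```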
models/contentunderstanding/readme.md (view @ 9a12e113)

# Content-understanding model library

## Introduction

-We provide PaddleRec implementations of the model algorithms commonly used in content-understanding tasks, along with single-machine training & prediction quality metrics and distributed training & prediction performance metrics. The implemented content-understanding models include [Tagspace](tagspace), [text classification](classification), and more.
+We provide PaddleRec implementations of the model algorithms commonly used in content-understanding tasks, along with single-machine training & prediction quality metrics and distributed training & prediction performance metrics. The implemented content-understanding models include [Tagspace](tagspace), [text classification](textcnn), [textcnn-based pretrained model](textcnn_pretrain), and more.

The model library is growing continuously; stay tuned.

@@ -23,7 +23,7 @@

| Model | Description | Paper |
| :------------------: | :--------------------: | :---------: |
| TagSpace | tag recommendation | [EMNLP 2014][TagSpace: Semantic Embeddings from Hashtags](https://www.aclweb.org/anthology/D14-1194.pdf) |
-| Classification | text classification | [EMNLP 2014][Convolutional neural networks for sentence classification](https://www.aclweb.org/anthology/D14-1181.pdf) |
+| textcnn | text classification | [EMNLP 2014][Convolutional neural networks for sentence classification](https://www.aclweb.org/anthology/D14-1181.pdf) |

Below is a brief introduction to each model (note: the images are taken from the linked papers).

@@ -32,7 +32,7 @@

<img align="center" src="../../doc/imgs/tagspace.png">
<p>
-[text-classification CNN model](https://www.aclweb.org/anthology/D14-1181.pdf)
+[textCNN model](https://www.aclweb.org/anthology/D14-1181.pdf)
<p align="center">
<img align="center" src="../../doc/imgs/cnn-ckim2014.png">
<p>

@@ -42,7 +42,7 @@

```
git clone https://github.com/PaddlePaddle/PaddleRec.git paddle-rec
cd PaddleRec
python -m paddlerec.run -m models/contentunderstanding/tagspace/config.yaml
-python -m paddlerec.run -m models/contentunderstanding/classification/config.yaml
+python -m paddlerec.run -m models/contentunderstanding/textcnn/config.yaml
```

## Tutorial (reproducing the papers)

@@ -134,7 +134,7 @@ batch: 13, acc: [0.928], loss: [0.01736144]

batch: 14, acc: [0.93], loss: [0.01911209]
```
-**(2) Classification**
+**(2) textcnn**

### Data processing

Sentiment classification (Senta) takes Chinese text containing subjective descriptions and automatically determines the text's sentiment polarity, with a confidence score; sentiment types are positive and negative. Sentiment analysis helps businesses understand users' consumption habits, analyze trending topics, and monitor public-opinion crises, providing useful decision support.

@@ -206,4 +206,4 @@ batch: 3, acc: [0.90234375], loss: [0.27907994]

| Dataset | Model | loss | acc |
| :------------------: | :--------------------: | :---------: |:---------: |
| ag news dataset | TagSpace | 0.0198 | 0.9177 |
-| ChnSentiCorp | Classification | 0.2282 | 0.9127 |
+| ChnSentiCorp | textcnn | 0.2282 | 0.9127 |
models/contentunderstanding/classification/__init__.py → models/contentunderstanding/textcnn/__init__.py (file moved)
models/contentunderstanding/classification/config.yaml → models/contentunderstanding/textcnn/config.yaml (view @ 9a12e113)

@@ -12,7 +12,7 @@

# See the License for the specific language governing permissions and
# limitations under the License.

-workspace: "models/contentunderstanding/classification"
+workspace: "models/contentunderstanding/textcnn"

dataset:
- name: data1
models/contentunderstanding/classification/data/preprocess.py → models/contentunderstanding/textcnn/data/preprocess.py (file moved)

models/contentunderstanding/classification/data/test/test.txt → models/contentunderstanding/textcnn/data/test/test.txt (file moved)

models/contentunderstanding/classification/data/train/train.txt → models/contentunderstanding/textcnn/data/train/train.txt (file moved)

models/contentunderstanding/classification/model.py → models/contentunderstanding/textcnn/model.py (file moved)

models/contentunderstanding/classification/reader.py → models/contentunderstanding/textcnn/reader.py (file moved)
models/contentunderstanding/classification/readme.md → models/contentunderstanding/textcnn/readme.md (view @ 9a12e113)

-# classification text-classification model
+# textcnn text-classification model

Below is a brief directory structure and description for this example:

```
├── data                 # sample data
    ├── train
        ├── train.txt    # training data sample
-       ├── train.txt    # training data sample
    ├── test
        ├── test.txt     # test data sample
    ├── preprocess.py    # data preprocessing script
```

@@ -15,7 +15,6 @@

├── config.yaml          # configuration file
├── reader.py            # reader
```

Note: before reading this example, we suggest you first go through the [PaddleRec beginner tutorial](https://github.com/PaddlePaddle/PaddleRec/blob/master/README.md)

@@ -73,13 +72,13 @@ os : windows/linux/macos

This example ships sample data for a quick tryout; run the following command directly under the paddlerec directory to start training:

```
-python -m paddlerec.run -m models/contentunderstanding/classification/config.yaml
+python -m paddlerec.run -m models/contentunderstanding/textcnn/config.yaml
```

## Reproducing results

To make every model easy to run quickly, we provide sample data under each model. To reproduce the results in the readme, follow the steps below in order.

-1. Make sure your current directory is PaddleRec/models/contentunderstanding/classification
+1. Make sure your current directory is PaddleRec/models/contentunderstanding/textcnn
2. Download and extract the dataset with the following command:
```
wget https://baidu-nlp.bj.bcebos.com/sentiment_classification-dataset-1.0.0.tar.gz
```
models/contentunderstanding/textcnn_pretrain/__init__.py (new file, mode 100644)

```python
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
models/contentunderstanding/textcnn_pretrain/basemodel.py (new file, mode 100644)

```python
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import paddle.fluid as fluid

from paddlerec.core.utils import envs
from paddlerec.core.model import ModelBase
from paddlerec.core.metrics import RecallK


class Model(ModelBase):
    def __init__(self, config):
        ModelBase.__init__(self, config)
        self.dict_size = 2000000 + 1
        self.max_seq_len = 1024
        self.emb_dim = 128
        self.cnn_hid_dim = 128
        self.cnn_win_size = 3
        self.cnn_win_size2 = 5
        self.hid_dim1 = 96
        self.class_dim = 30
        self.is_sparse = True

    def input_data(self, is_infer=False, **kwargs):
        text = fluid.data(
            name="text", shape=[None, self.max_seq_len, 1], dtype='int64')
        label = fluid.data(name="category", shape=[None, 1], dtype='int64')
        seq_len = fluid.data(name="seq_len", shape=[None], dtype='int64')
        return [text, label, seq_len]

    def net(self, inputs, is_infer=False):
        """ network definition """
        # text, label
        self.data = inputs[0]
        self.label = inputs[1]
        self.seq_len = inputs[2]
        emb = embedding(self.data, self.dict_size, self.emb_dim,
                        self.is_sparse)
        concat = multi_convs(emb, self.seq_len, self.cnn_hid_dim,
                             self.cnn_win_size, self.cnn_win_size2)
        self.fc_1 = full_connect(concat, self.hid_dim1)
        self.metrics(is_infer)

    def metrics(self, is_infer=False):
        """ classification and metrics """
        # softmax layer
        prediction = fluid.layers.fc(
            input=[self.fc_1],
            size=self.class_dim,
            act="softmax",
            name="pretrain_fc_1")
        cost = fluid.layers.cross_entropy(input=prediction, label=self.label)
        avg_cost = fluid.layers.mean(x=cost)
        acc = fluid.layers.accuracy(input=prediction, label=self.label)
        # acc = RecallK(input=prediction, label=label, k=1)
        self._cost = avg_cost
        if is_infer:
            self._infer_results["acc"] = acc
        else:
            self._metrics["acc"] = acc


def embedding(inputs, dict_size, emb_dim, is_sparse):
    """ embedding definition """
    emb = fluid.layers.embedding(
        input=inputs,
        size=[dict_size, emb_dim],
        is_sparse=is_sparse,
        param_attr=fluid.ParamAttr(
            name='pretrain_word_embedding',
            initializer=fluid.initializer.Xavier()))
    return emb


def multi_convs(input_layer, seq_len, cnn_hid_dim, cnn_win_size,
                cnn_win_size2):
    """conv and concat"""
    emb = fluid.layers.sequence_unpad(
        input_layer, length=seq_len, name="pretrain_unpad")
    conv = fluid.nets.sequence_conv_pool(
        param_attr=fluid.ParamAttr(name="pretrain_conv0_w"),
        bias_attr=fluid.ParamAttr(name="pretrain_conv0_b"),
        input=emb,
        num_filters=cnn_hid_dim,
        filter_size=cnn_win_size,
        act="tanh",
        pool_type="max")
    conv2 = fluid.nets.sequence_conv_pool(
        param_attr=fluid.ParamAttr(name="pretrain_conv1_w"),
        bias_attr=fluid.ParamAttr(name="pretrain_conv1_b"),
        input=emb,
        num_filters=cnn_hid_dim,
        filter_size=cnn_win_size2,
        act="tanh",
        pool_type="max")
    concat = fluid.layers.concat(
        input=[conv, conv2], axis=1, name="pretrain_concat")
    return concat


def full_connect(input_layer, hid_dim1):
    """full connect layer"""
    fc_1 = fluid.layers.fc(
        name="pretrain_fc_0", input=input_layer, size=hid_dim1, act="tanh")
    return fc_1
```
models/contentunderstanding/textcnn_pretrain/config.yaml (new file, mode 100644)

```yaml
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

workspace: "models/contentunderstanding/textcnn_pretrain"

dataset:
- name: dataset_train
  batch_size: 128
  type: DataLoader
  data_path: "{workspace}/senta_data/train"
  data_converter: "{workspace}/reader.py"
- name: dataset_infer
  batch_size: 256
  type: DataLoader
  data_path: "{workspace}/senta_data/test"
  data_converter: "{workspace}/reader.py"

hyper_parameters:
  optimizer:
    class: adam
    learning_rate: 0.001
    strategy: async

mode: [train_runner, infer_runner]

runner:
- name: train_runner
  class: train
  epochs: 6
  device: cpu
  save_checkpoint_interval: 1
  save_checkpoint_path: "increment"
  init_model_path: ""
  print_interval: 10
  # startup class for finetuning
  startup_class_path: "{workspace}/finetune_startup.py"
  # path of pretrained model. Please set empty if you don't use finetune function.
  init_pretraining_model_path: "{workspace}/pretrain_model/pretrain_model_params"
  phases: phase_train
- name: infer_runner
  class: infer
  # device to run training or infer
  device: cpu
  print_interval: 1
  # load model path
  init_model_path: "increment/3"
  phases: phase_infer

phase:
- name: phase_train
  model: "{workspace}/model.py"
  dataset_name: dataset_train
  thread_num: 1
- name: phase_infer
  # user-defined model
  model: "{workspace}/model.py"
  # select dataset by name
  dataset_name: dataset_infer
  thread_num: 1
```
models/contentunderstanding/textcnn_pretrain/data/preprocess.py (new file, mode 100644)

```python
# encoding=utf-8
import os
import sys


def build_word_dict():
    word_file = "word_dict.txt"
    f = open(word_file, "r")
    word_dict = {}
    lines = f.readlines()
    for line in lines:
        word = line.strip().split("\t")
        word_dict[word[0]] = word[1]
    f.close()
    return word_dict


def build_token_data(word_dict, txt_file, token_file):
    max_text_size = 100

    f = open(txt_file, "r")
    fout = open(token_file, "w")
    lines = f.readlines()
    i = 0

    for line in lines:
        line = line.strip("\n").split("\t")
        text = line[0].strip("\n").split(" ")
        tokens = []
        label = line[1]
        for word in text:
            if word in word_dict:
                tokens.append(str(word_dict[word]))
            else:
                tokens.append("0")

        seg_len = len(tokens)
        if seg_len < 5:
            continue
        if seg_len >= max_text_size:
            tokens = tokens[:max_text_size]
            seg_len = max_text_size
        else:
            tokens = tokens + ["0"] * (max_text_size - seg_len)
        text_tokens = " ".join(tokens)
        fout.write(text_tokens + " " + str(seg_len) + " " + label + "\n")
        if (i + 1) % 100 == 0:
            print(str(i + 1) + " lines OK")
        i += 1

    fout.close()
    f.close()


word_dict = build_word_dict()

txt_file = "test.tsv"
token_file = "test.txt"
build_token_data(word_dict, txt_file, token_file)

txt_file = "dev.tsv"
token_file = "dev.txt"
build_token_data(word_dict, txt_file, token_file)

txt_file = "train.tsv"
token_file = "train.txt"
build_token_data(word_dict, txt_file, token_file)
```
models/contentunderstanding/textcnn_pretrain/data/test/test.txt (new file, mode 100644)
5681 17044 4352 7574 16576 3574 32952 12211 18835 28961 15320 2019 21675 30604 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 14 1
9054 31881 4449 12211 12488 5975 3574 28592 2547 2547 14132 3574 24908 5975 24285 10010 3574 31872 20925 9886 12211 26530 3567 30818 19640 22506 28312 19887 12211 28212 8576 3574 28592 12306 14132 539 33049 9039 14160 113 3567 19675 5511 2111 623 12068 12211 3574 18416 12068 19680 12211 30781 21946 1525 9886 3574 28109 31201 3567 25710 30503 30781 12068 19887 12211 22052 3574 2050 5402 10217 31201 1525 9698 14160 19887 3574 26209 24908 539 33049 9039 32949 8890 29693 3566 3566 11053 30781 26853 3567 3567 0 0 0 0 0 0 0 0 92 0
19640 32771 31526 16576 13354 3574 5087 30781 7902 19037 12211 0 3574 4756 15048 11063 0 15019 16576 2019 29812 2276 22804 13275 2019 24599 12211 30294 6983 26606 1467 3574 18448 8052 16576 23091 32440 11034 16576 3574 1470 6983 1346 31382 13354 3574 11711 10074 28587 5030 19058 16576 2019 16497 6890 12223 30035 6983 1112 18448 30837 11280 24599 2019 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 64 0
7513 19838 3562 32737 15474 3562 1887 15474 0 0 18835 19813 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 12 1
30325 3574 30788 12211 25843 11533 30150 8937 11309 8690 12211 14166 2200 3574 15802 0 20424 14166 25336 113 16576 11533 24294 12211 26301 16576 3574 28592 16191 12211 8690 13743 0 517 12211 0 0 23958 3574 31019 19680 13841 15337 12211 23958 30781 28630 3574 8690 12700 11280 12211 23958 24908 20409 7481 8052 6094 4002 30245 3574 1526 9904 27032 31347 24006 12211 14166 0 9910 24908 12211 0 2019 25469 17293 27438 29774 13757 24908 22301 28505 25450 12211 14039 3574 28801 4621 4879 3574 623 9904 23958 14166 18417 4895 113 11114 2018 113 100 1
113 16576 17947 28955 12211 24253 3574 22068 30167 12211 14039 30818 28640 7801 2019 7985 30167 5402 6805 0 12211 27645 33067 30151 3574 11110 12211 10710 4549 22708 4308 24908 25975 12211 26957 0 2019 17942 25575 227 19641 1525 13129 113 15492 23224 3574 21163 15565 23273 29004 12452 13233 27573 12211 12046 2019 302 19367 16576 27914 0 0 113 12211 28035 0 13743 13330 24390 12466 1525 12537 3574 18131 2019 9315 25720 27416 2276 15038 18162 10024 28955 3574 10097 18162 26594 12211 21949 3574 30788 12133 26362 1779 27386 21017 14295 1525 454 100 1
33022 4169 19038 25096 3574 19185 113 25010 0 0 10511 17460 28972 6574 3574 1409 0 10010 3574 33022 129 16186 10511 17460 15182 3574 20235 10511 17460 11226 27150 13166 3562 18835 19038 5391 3574 22195 8052 28892 31948 10960 3574 13367 29338 15048 11030 22185 18621 28776 5205 2019 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 52 0
23439 330 0 0 29655 12211 3574 4211 3574 19650 19640 13757 3562 0 0 8990 330 0 0 18920 12211 31924 6688 31857 15364 3574 19641 30781 18416 28952 9209 12211 118 10710 16912 3562 0 0 27771 330 0 0 10126 30325 3574 15374 4348 0 6356 28420 24193 29526 12211 10523 21872 3571 24383 1580 3574 17536 1525 14745 21674 10710 4952 14871 3574 14590 20306 7695 0 32718 3562 0 0 13260 330 0 0 5847 30325 3574 25951 26995 21163 22787 15535 20889 3574 27914 5391 130 2276 15243 6356 0 16576 3562 0 0 100 1
24908 32568 24044 28952 16576 27914 28955 3574 14160 13543 16582 5536 2019 11711 3527 19675 12211 15474 3574 0 14160 31857 30927 2019 18416 9231 12486 12211 20374 3574 1111 30173 19058 3574 31857 31825 3574 30170 15501 21070 2019 31383 19640 5004 3574 31858 12211 6408 2733 8034 24870 12730 12211 16401 2019 18416 19640 9072 18416 12211 2313 12211 20374 3574 18416 2313 25575 19315 31383 20374 20161 24160 3574 11711 3527 3574 31383 20374 31857 28378 2019 1296 5402 23273 16576 2019 16497 28952 2019 9512 15038 5536 3574 11711 10486 15168 19641 21994 0 2019 100 1
0 7902 5402 29107 16576 15535 15535 15535 0 19634 21017 12211 26505 14160 15129 0 15535 15535 15535 26211 4002 9749 23360 16576 15535 15535 15535 26040 15535 15535 15535 15535 11698 32986 19641 0 22421 15535 15535 15535 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 40 0
28955 17755 3574 1735 18232 19262 12992 12230 3574 18416 30781 7388 19680 19643 16576 12211 3574 28952 9209 3574 16572 22360 2019 19680 19643 6414 12211 2011 27666 2012 3574 13757 32205 3574 14754 11280 12211 22186 7628 1827 17413 3574 19641 30781 31383 12211 4853 2019 33140 113 6047 6414 3310 31383 3574 4654 22360 6580 26147 12211 18696 2019 12306 6414 20539 3574 12680 22360 18624 8051 29384 1146 2019 18046 33188 16582 29384 12211 17311 13222 3574 18416 7453 28961 8014 3574 11711 18416 28961 17658 3574 29384 30781 19893 19643 15073 12211 32171 12211 2019 100 0
28955 12211 30964 14590 28961 4412 29183 29493 6393 17111 29183 11670 12211 19636 23233 28961 4412 29183 25469 1112 16603 14590 16720 28961 9749 32365 23958 12211 33245 1525 11271 29183 29607 4694 8052 12068 32247 26813 29183 12229 6856 3674 330 30326 972 32948 29183 18416 28961 20161 1120 19641 30054 28955 330 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 55 0
28587 26594 16393 14439 20100 8452 12211 11738 3574 20288 2276 2770 9051 29266 3574 27097 12211 0 14648 7902 5827 4308 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 22 1
19083 3561 20034 30173 8356 3574 18416 18016 6154 13757 30827 23410 4879 5213 3566 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 15 1
28587 14745 2018 1580 3574 19636 9052 14160 19683 16576 0 0 6007 5361 26370 5391 785 3574 0 17010 28587 27857 19048 20558 9051 3574 6007 0 0 22897 18323 1447 2019 0 0 32391 17536 24961 19048 9749 18448 3574 24283 6356 7648 26789 2019 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 47 0
24908 18920 1400 665 16167 12211 17293 3574 13518 28952 8393 23504 3574 31266 12211 30781 4477 2019 4654 18896 4289 13841 4822 3574 24908 27376 15243 18416 8052 20077 17493 17317 3574 14842 16949 3574 12081 28961 2276 0 14399 20158 14398 16335 12211 3699 7697 6318 69 2019 11924 8053 27376 12211 14039 3574 21210 23273 3574 1732 30818 17942 22561 3083 2019 17268 12700 28892 9108 16576 26203 19037 23872 3574 14988 31773 3574 33140 1725 24908 0 8053 8052 13841 3574 25944 0 2019 4032 5025 13841 19185 12211 14039 3574 665 0 12211 4822 6988 100 1
29728 31619 6149 5402 113 7317 11738 3574 31482 11924 16576 17657 6541 9761 3574 31224 5402 21141 3574 6356 16191 19640 14451 26154 7192 16076 3567 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 27 0
29302 11364 19059 13652 12211 3574 7898 30781 6356 7961 14954 21752 7340 2019 29302 11401 8328 3574 20384 20034 1460 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 21 0
4592 12211 31382 11030 3574 7961 6356 136 11714 31881 31478 3574 7957 11533 17413 3574 18835 14451 14550 11533 389 3574 14444 20444 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 24 1
18416 24908 0 5233 22185 12211 29183 18956 30781 9668 8904 15168 18416 16108 29183 18416 29123 4351 28845 11709 11731 30486 21200 3574 4351 32986 8052 13757 11711 16497 25138 18448 3006 30326 20837 6356 16060 11231 13757 18448 11731 29173 3576 18835 27924 11711 11533 11225 3574 17386 15934 7288 0 26216 12211 1542 3574 24908 12511 18416 16060 11231 32842 18448 11731 29173 3574 18956 9668 31387 755 32986 18416 28972 18855 30781 18448 3006 30326 20837 30781 8052 13757 15048 18448 11731 29173 12211 3574 19640 18584 18416 32986 25710 18416 2276 29173 12211 22052 24908 100 0
models/contentunderstanding/textcnn_pretrain/data/train/train.txt (new file, mode 100644)
Diff collapsed (100 added lines of training data not shown).
models/contentunderstanding/textcnn_pretrain/finetune_startup.py (new file, mode 100644)

```python
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function

import warnings
import os

import paddle.fluid as fluid
import paddle.fluid.core as core
from paddlerec.core.utils import envs
from paddlerec.core.trainers.framework.startup import StartupBase
from paddlerec.core.trainer import EngineMode

__all__ = ["Startup"]


class Startup(StartupBase):
    """R
    """

    def __init__(self, context):
        self.op_name_scope = "op_namescope"
        self.clip_op_name_scope = "@CLIP"
        self.op_role_var_attr_name = \
            core.op_proto_and_checker_maker.kOpRoleVarAttrName()
        print("Running FineTuningStartup.")

    def _is_opt_role_op(self, op):
        # NOTE: depend on oprole to find out whether this op is for
        # optimize
        op_maker = core.op_proto_and_checker_maker
        optimize_role = core.op_proto_and_checker_maker.OpRole.Optimize
        if op_maker.kOpRoleAttrName() in op.attr_names and \
                int(op.all_attrs()[op_maker.kOpRoleAttrName()]) == int(optimize_role):
            return True
        return False

    def _get_params_grads(self, program):
        """
        Get optimizer operators, parameters and gradients from origin_program
        Returns:
            opt_ops (list): optimize operators.
            params_grads (dict): parameter->gradient.
        """
        block = program.global_block()
        params_grads = []
        # tmp set to dedup
        optimize_params = set()
        origin_var_dict = program.global_block().vars
        for op in block.ops:
            if self._is_opt_role_op(op):
                # Todo(chengmo): Whether clip related op belongs to Optimize guard should be discussed
                # delete clip op from opt_ops when run in Parameter Server mode
                if self.op_name_scope in op.all_attrs() and \
                        self.clip_op_name_scope in op.attr(self.op_name_scope):
                    op._set_attr(
                        "op_role",
                        int(core.op_proto_and_checker_maker.OpRole.Backward))
                    continue
                if op.attr(self.op_role_var_attr_name):
                    param_name = op.attr(self.op_role_var_attr_name)[0]
                    grad_name = op.attr(self.op_role_var_attr_name)[1]
                    if not param_name in optimize_params:
                        optimize_params.add(param_name)
                        params_grads.append([
                            origin_var_dict[param_name],
                            origin_var_dict[grad_name]
                        ])
        return params_grads

    @staticmethod
    def is_persistable(var):
        """
        Check whether the given variable is persistable.

        Args:
            var(Variable): The variable to be checked.

        Returns:
            bool: True if the given `var` is persistable
            False if not.

        Examples:
            .. code-block:: python

                import paddle.fluid as fluid
                param = fluid.default_main_program().global_block().var('fc.b')
                res = fluid.io.is_persistable(param)
        """
        if var.desc.type() == core.VarDesc.VarType.FEED_MINIBATCH or \
                var.desc.type() == core.VarDesc.VarType.FETCH_LIST or \
                var.desc.type() == core.VarDesc.VarType.READER:
            return False
        return var.persistable

    def load(self, context, is_fleet=False, main_program=None):
        dirname = envs.get_global_env(
            "runner." + context["runner_name"] + ".init_pretraining_model_path",
            "")
        hotstart_dirname = envs.get_global_env(
            "runner." + context["runner_name"] + ".init_model_path", "")

        def existed_params(var):
            if not isinstance(var, fluid.framework.Parameter):
                return False
            if os.path.exists(os.path.join(dirname, var.name)):
                print("INIT %s" % var.name)
                return True
            else:
                # print("SKIP %s" % var.name)
                return False

        if hotstart_dirname != "":
            # If init_model_path exists, hot start is the first choice
            print("going to load ", hotstart_dirname)
            fluid.io.load_persistables(
                context["exe"], hotstart_dirname, main_program=main_program)
            print("load from {} success".format(hotstart_dirname))
        elif dirname != "":
            # If init_pretraining_model_path exists, load the pretrained
            # model's parameters
            print("going to load ", dirname)
            fluid.io.load_vars(
                context["exe"],
                dirname,
                main_program=main_program,
                predicate=existed_params)
            print("load from {} success".format(dirname))
        else:
            # If both of the above are empty, cold-start the model
            return

    def startup(self, context):
        for model_dict in context["phases"]:
            with fluid.scope_guard(context["model"][model_dict["name"]][
                    "scope"]):
                train_prog = context["model"][model_dict["name"]][
                    "main_program"]
                startup_prog = context["model"][model_dict["name"]][
                    "startup_program"]
                with fluid.program_guard(train_prog, startup_prog):
                    context["exe"].run(startup_prog)
                    self.load(context, main_program=train_prog)
        context["status"] = "train_pass"
```
models/contentunderstanding/textcnn_pretrain/model.py (new file, mode 100644)

```python
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import paddle.fluid as fluid

from paddlerec.core.utils import envs
from paddlerec.core.model import ModelBase
from basemodel import embedding


class Model(ModelBase):
    def __init__(self, config):
        ModelBase.__init__(self, config)
        self.dict_size = 2000001
        self.max_len = 100
        self.cnn_dim = 128
        self.cnn_filter_size1 = 1
        self.cnn_filter_size2 = 2
        self.cnn_filter_size3 = 3
        self.emb_dim = 128
        self.hid_dim = 96
        self.class_dim = 2
        self.is_sparse = True

    def input_data(self, is_infer=False, **kwargs):
        data = fluid.data(
            name="input", shape=[None, self.max_len, 1], dtype='int64')
        seq_len = fluid.data(name="seq_len", shape=[None], dtype='int64')
        label = fluid.data(name="label", shape=[None, 1], dtype='int64')
        return [data, seq_len, label]

    def net(self, input, is_infer=False):
        """ network definition """
        self.data = input[0]
        self.seq_len = input[1]
        self.label = input[2]

        # embedding layer
        emb = embedding(self.data, self.dict_size, self.emb_dim,
                        self.is_sparse)
        emb = fluid.layers.sequence_unpad(emb, length=self.seq_len)
        # convolution layer
        conv1 = fluid.nets.sequence_conv_pool(
            input=emb,
            num_filters=self.cnn_dim,
            filter_size=self.cnn_filter_size1,
            act="tanh",
            pool_type="max")
        conv2 = fluid.nets.sequence_conv_pool(
            input=emb,
            num_filters=self.cnn_dim,
            filter_size=self.cnn_filter_size2,
            act="tanh",
            pool_type="max")
        conv3 = fluid.nets.sequence_conv_pool(
            input=emb,
            num_filters=self.cnn_dim,
            filter_size=self.cnn_filter_size3,
            act="tanh",
            pool_type="max")
        convs_out = fluid.layers.concat(input=[conv1, conv2, conv3], axis=1)
        # full connect layer
        fc_1 = fluid.layers.fc(input=convs_out, size=self.hid_dim, act="tanh")
        # softmax layer
        prediction = fluid.layers.fc(
            input=[fc_1], size=self.class_dim, act="softmax")

        cost = fluid.layers.cross_entropy(input=prediction, label=self.label)
        avg_cost = fluid.layers.mean(x=cost)
        acc = fluid.layers.accuracy(input=prediction, label=self.label)

        self._cost = avg_cost
        if is_infer:
            self._infer_results["acc"] = acc
            self._infer_results["loss"] = avg_cost
        else:
            self._metrics["acc"] = acc
            self._metrics["loss"] = avg_cost
```
models/contentunderstanding/textcnn_pretrain/reader.py (new file, mode 100644)

```python
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import sys

from paddlerec.core.reader import ReaderBase


class Reader(ReaderBase):
    def init(self):
        pass

    def _process_line(self, l):
        l = l.strip().split()
        data = l[0:100]
        seq_len = l[100:101]
        label = l[101:]
        return data, label, seq_len

    def generate_sample(self, line):
        def data_iter():
            data, label, seq_len = self._process_line(line)
            if data is None:
                yield None
                return
            data = [int(i) for i in data]
            label = [int(i) for i in label]
            seq_len = [int(i) for i in seq_len]
            yield [('data', data), ('seq_len', seq_len), ('label', label)]

        return data_iter
```
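
Each line produced by preprocess.py is 102 whitespace-separated integers: 100 padded token ids, the true sequence length, and the label, which is exactly what the slicing in `_process_line` assumes. A small self-contained illustration of that layout (sample values made up):

```python
# 100 token ids (two real ids, 98 padding zeros), seq_len=2, label=1
line = " ".join(["7513", "19838"] + ["0"] * 98 + ["2", "1"])

tokens = line.strip().split()
data, seq_len, label = tokens[0:100], tokens[100:101], tokens[101:]
assert len(data) == 100 and seq_len == ["2"] and label == ["1"]
```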
models/contentunderstanding/textcnn_pretrain/readme.md (new file, mode 100644)

# Fine-tuning the textcnn model with a text-classification model as the pretrained model

Below is a brief directory structure and description for this example:

```
├── data                 # sample data
    ├── train
        ├── train.txt    # training data sample
    ├── test
        ├── test.txt     # test data sample
    ├── preprocess.py    # data preprocessing script
├── __init__.py
├── README.md            # documentation
├── model.py             # model file
├── basemodel.py         # pretrained model
├── config.yaml          # configuration file
├── reader.py            # reader
├── finetune_startup.py  # parameter loading
```

Note: before reading this example, we suggest you first go through the [PaddleRec beginner tutorial](https://github.com/PaddlePaddle/PaddleRec/blob/master/README.md)

## Contents

- [Model introduction](#model-introduction)
- [Data preparation](#data-preparation)
- [Runtime environment](#runtime-environment)
- [Quick start](#quick-start)
- [Reproducing results](#reproducing-results)
- [Advanced usage](#advanced-usage)
- [FAQ](#faq)

## Model introduction

Sentiment classification (Senta) takes Chinese text containing subjective descriptions and automatically determines the text's sentiment polarity, with a confidence score; sentiment types are positive and negative. In this example we provide, as the pretrained model, a textCNN model (a CNN with 2 convolution kernels) trained on a large-scale multi-class article dataset. We use this pretrained model to fine-tune the textcnn model (a CNN with 3 convolution kernels) under the contentunderstanding directory: the embedding layer of the pretrained model is transferred into that textcnn model, which still performs the binary sentiment-classification task. The result is essentially unchanged accuracy with faster convergence.

In [EMNLP 2014][Convolutional neural networks for sentence classification](https://www.aclweb.org/anthology/D14-1181.pdf), Yoon Kim proposed TextCNN and its basic structure: a convolutional neural network (CNN) applied to text classification, using multiple kernels of different sizes to extract key information from a sentence (similar to n-grams with multiple window sizes) and thereby capture local correlations better. The main structure of the model is shown below:
<p align="center">
<img align="center" src="../../../doc/imgs/cnn-ckim2014.png">
<p>

## Data preparation

Sentiment analysis helps businesses understand users' consumption habits, analyze trending topics, and monitor public-opinion crises, providing useful decision support. Sentiment is a high-level intelligent human behavior: identifying the sentiment orientation of text requires deep semantic modeling, and since different domains (e.g. dining, sports) express sentiment differently, training requires large-scale data covering many domains. We address these two problems with a deep-learning-based semantic model and large-scale data mining. For evaluation, like the textcnn model under the contentunderstanding directory, we use the open-source sentiment classification dataset ChnSentiCorp.

You can fetch our pretrained model (basemodel.py, pretrain_model_params) and the corresponding dictionary (word_dict.txt) directly with:
```
wget https://paddlerec.bj.bcebos.com/textcnn_pretrain%2Fpretrain_model.tar.gz
tar -zxvf textcnn_pretrain%2Fpretrain_model.tar.gz
```

You can download our pre-tokenized dataset with the commands below. After extraction, the senta_data directory contains the training data (train.tsv), dev data (dev.tsv), test data (test.tsv), and the corresponding dictionary (word_dict.txt):
```
wget https://baidu-nlp.bj.bcebos.com/sentiment_classification-dataset-1.0.0.tar.gz
tar -zxvf sentiment_classification-dataset-1.0.0.tar.gz
```

Each record is one Chinese review sentence plus a label carrying the sentiment, separated by a tab (\t). The review text is already tokenized, with words separated by spaces.
```
15.4寸 笔记本 的 键盘 确实 爽 , 基本 跟 台式机 差不多 了 , 蛮 喜欢 数字 小 键盘 , 输 数字 特 方便 , 样子 也 很 美观 , 做工 也 相当 不错 1
跟 心灵 鸡汤 没 什么 本质 区别 嘛 , 至少 我 不 喜欢 这样 读 经典 , 把 经典 都 解读 成 这样 有点 去 中国 化 的 味道 了 0
```
## Runtime environment

PaddlePaddle >= 1.7.2

python 2.7/3.5/3.6/3.7

PaddleRec >= 0.1

os: windows/linux/macos

## Quick start

This example needs the model's parameter files and the finetune dataset to be downloaded before the effect of finetuning can be shown, so we do not provide a quick one-click run for now. To try out finetuning, follow the steps in the "Reproducing results" section below.

## Reproducing results

In this section we want users to understand how to fine-tune their own model with a pretrained model.

1. Make sure your current directory is PaddleRec/models/contentunderstanding/textcnn_pretrain
2. Download and extract the dataset with the commands below; after extraction you will see a senta_data directory:
```
wget https://baidu-nlp.bj.bcebos.com/sentiment_classification-dataset-1.0.0.tar.gz
tar -zxvf sentiment_classification-dataset-1.0.0.tar.gz
```
3. Download and extract the pretrained model:
```
wget https://paddlerec.bj.bcebos.com/textcnn_pretrain%2Fpretrain_model.tar.gz
tar -zxvf textcnn_pretrain%2Fpretrain_model.tar.gz
```
4. This example ships a script that quickly converts the Chinese text in the dataset into a trainable format. After downloading the pretrained model, copy its word_dict.txt into the senta_data folder; after extracting the dataset, copy preprocess.py into senta_data. Running preprocess.py then converts dev.tsv, test.tsv, and train.tsv into directly trainable txt files according to the dictionary mapping:
```
rm -f senta_data/word_dict.txt
cp pretrain_model/word_dict.txt senta_data
cp data/preprocess.py senta_data/
cd senta_data
python3 preprocess.py
mkdir train
mv train.txt train
mkdir test
mv test.txt test
cd ..
```
5. Open config.yaml and change the workspace parameter to your current absolute path (you can get it with the pwd command).
6. Run the command to start training:
```
python -m paddlerec.run -m ./config.yaml
```
7. Expected output:
```
PaddleRec: Runner infer_runner Begin
Executor Mode: infer
processor_register begin
Running SingleInstance.
Running SingleNetwork.
Running SingleInferStartup.
Running SingleInferRunner.
load persistables from increment/3
batch: 1, acc: [0.8828125], loss: [0.35940486]
batch: 2, acc: [0.91796875], loss: [0.24300358]
batch: 3, acc: [0.91015625], loss: [0.2490797]
Infer phase_infer of epoch increment/3 done, use time: 0.78388094902, global metrics: acc=[0.91015625], loss=[0.2490797]
PaddleRec Finish
```

## Advanced usage

After looking through model.py and config.yaml you will notice some changes compared with the previous models. This section explains those changes in detail so you can understand them and apply them flexibly in your own programs.

1. In model.py, the embedding layer is built by passing the inputs straight to the embedding layer from basemodel.py. This is because the example reuses the embedding layer of the pretrained model (basemodel.py): trained on a large corpus, that layer already carries a great deal of prior knowledge, which is very helpful for downstream tasks, especially on small datasets.
2. In config.yaml, the train_runner gains two new parameters: startup_class_path and init_pretraining_model_path.
The startup_class_path parameter customizes the training startup flow; in the custom finetune_startup.py we load the previously trained parameters into the model.
The init_pretraining_model_path parameter specifies where to load parameters from: any parameter file under that path whose name matches a var in the model is loaded into the model.
When you set the init_model_path parameter, the program first tries to hot-start from that path. When there is no init_model_path and a hot start is impossible, the program tries to load the parameters under init_pretraining_model_path and run finetune training. Only when both are empty does the model cold-start from scratch.
If you want to learn more about customizing flows, see [how to add a custom flow](https://github.com/PaddlePaddle/PaddleRec/blob/master/doc/trainer_develop.md#%E5%A6%82%E4%BD%95%E6%B7%BB%E5%8A%A0%E8%87%AA%E5%AE%9A%E4%B9%89%E6%B5%81%E7%A8%8B)
3. basemodel.py provides three modules, embedding, multi_convs, and full_connect, which you can import directly whenever you need them.
The corresponding parameters can be found under pretrain_model/pretrain_model_params in the pretrained-model download provided in this document; a usage sketch follows this list.
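
A minimal sketch of reusing the pretrained embedding in your own model file, mirroring what model.py in this commit does (multi_convs and full_connect can be imported the same way; sizes below are the ones model.py uses):

```python
import paddle.fluid as fluid
from basemodel import embedding  # pretrained embedding layer

data = fluid.data(name="input", shape=[None, 100, 1], dtype='int64')
# The parameter name 'pretrain_word_embedding' set inside embedding()
# matches a file in pretrain_model/pretrain_model_params, so
# finetune_startup.py restores it when finetune training starts.
emb = embedding(data, 2000001, 128, True)  # dict_size, emb_dim, is_sparse
```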
## FAQ
tools/build_script.sh (view @ 9a12e113)

@@ -49,7 +49,7 @@ function model_test() {

     root_dir=`pwd`
     all_model=$(find ${root_dir} -name config.yaml)
-    special_models=("demo" "pnn" "fgcnn" "gru4rec" "tagspace")
+    special_models=("demo" "pnn" "fgcnn" "gru4rec" "tagspace" "textcnn_pretrain")

     for model in ${all_model}
     do