Commit fb1a4c95 authored by yinhaofeng

change dssm and match readme

Parent d4a280b5
--- a/models/match/dssm/config.yaml
+++ b/models/match/dssm/config.yaml
@@ -17,14 +17,14 @@ workspace: "models/match/dssm"
 
 dataset:
 - name: dataset_train
-  batch_size: 4
-  type: QueueDataset
+  batch_size: 8
+  type: DataLoader # or QueueDataset
   data_path: "{workspace}/data/train"
   data_converter: "{workspace}/synthetic_reader.py"
 - name: dataset_infer
   batch_size: 1
-  type: QueueDataset
-  data_path: "{workspace}/data/train"
+  type: DataLoader # or QueueDataset
+  data_path: "{workspace}/data/test"
   data_converter: "{workspace}/synthetic_evaluate_reader.py"
 
 hyper_parameters:
@@ -32,12 +32,12 @@ hyper_parameters:
     class: sgd
     learning_rate: 0.01
     strategy: async
-  trigram_d: 1000
-  neg_num: 4
+  trigram_d: 1439
+  neg_num: 1
   fc_sizes: [300, 300, 128]
   fc_acts: ['tanh', 'tanh', 'tanh']
 
-mode: train_runner
+mode: [train_runner, infer_runner]
 # config of each runner.
 # runner is a kind of paddle training class, which wraps the train/infer process.
 runner:
@@ -47,20 +47,22 @@ runner:
   epochs: 4
   # device to run training or infer
   device: cpu
-  save_checkpoint_interval: 2 # save model interval of epochs
-  save_inference_interval: 4 # save inference
+  save_checkpoint_interval: 1 # save model interval of epochs
+  save_inference_interval: 1 # save inference
   save_checkpoint_path: "increment" # save checkpoint path
   save_inference_path: "inference" # save inference path
   save_inference_feed_varnames: ["query", "doc_pos"] # feed vars of save inference
   save_inference_fetch_varnames: ["cos_sim_0.tmp_0"] # fetch vars of save inference
   init_model_path: "" # load model path
   print_interval: 2
+  phases: phase1
 - name: infer_runner
   class: infer
   # device to run training or infer
   device: cpu
   print_interval: 1
-  init_model_path: "increment/2" # load model path
+  init_model_path: "increment/3" # load model path
+  phases: phase2
 
 # runner will run all the phase in each epoch
 phase:
@@ -68,7 +70,7 @@ phase:
   model: "{workspace}/model.py" # user-defined model
   dataset_name: dataset_train # select dataset by name
   thread_num: 1
-#- name: phase2
-#  model: "{workspace}/model.py" # user-defined model
-#  dataset_name: dataset_infer # select dataset by name
-#  thread_num: 1
+- name: phase2
+  model: "{workspace}/model.py" # user-defined model
+  dataset_name: dataset_infer # select dataset by name
+  thread_num: 1
# encoding=utf-8
import random

# Read the raw zhidao dataset: each line is "query \t doc \t label".
f = open("./zhidao", "r")
lines = f.readlines()
f.close()

# Build the vocabulary, mapping each word to an id ordered by frequency.
word_dict = {}
for line in lines:
    line = line.strip().split("\t")
    text = line[0].split(" ") + line[1].split(" ")
    for word in text:
        if word in word_dict:
            word_dict[word] = word_dict[word] + 1
        else:
            word_dict[word] = 1
word_list = sorted(word_dict.items(), key=lambda item: item[1], reverse=True)
word_list_ids = range(1, len(word_list) + 1)
word_dict = dict(zip([x[0] for x in word_list], word_list_ids))

# Split the data into a training set (first 900 lines) and a test set.
lines = [line.strip().split("\t") for line in lines]
random.shuffle(lines)
train_set = lines[:900]
test_set = lines[900:]

# Build a dict keyed by query whose values are the negative docs (label 0).
neg_dict = {}
for line in train_set:
    if line[2] == "0":
        if line[0] in neg_dict:
            neg_dict[line[0]].append(line[1])
        else:
            neg_dict[line[0]] = [line[1]]

# Build a dict keyed by query whose values are the positive docs (label 1).
pos_dict = {}
for line in train_set:
    if line[2] == "1":
        if line[0] in pos_dict:
            pos_dict[line[0]].append(line[1])
        else:
            pos_dict[line[0]] = [line[1]]

# Arrange the training set into "query \t pos \t neg" triples.
f = open("train.txt", "w")
for query in pos_dict.keys():
    for pos in pos_dict[query]:
        if query not in neg_dict:
            continue
        for neg in neg_dict[query]:
            f.write(str(query) + "\t" + str(pos) + "\t" + str(neg) + "\n")
f.close()

f = open("train.txt", "r")
lines = f.readlines()
f.close()

# Convert query, pos and neg in the training set to bag-of-words vectors.
f = open("train.txt", "w")
for line in lines:
    line = line.strip().split("\t")
    query = line[0].strip().split(" ")
    pos = line[1].strip().split(" ")
    neg = line[2].strip().split(" ")
    query_token = [0] * (len(word_dict) + 1)
    for word in query:
        query_token[word_dict[word]] = 1
    pos_token = [0] * (len(word_dict) + 1)
    for word in pos:
        pos_token[word_dict[word]] = 1
    neg_token = [0] * (len(word_dict) + 1)
    for word in neg:
        neg_token[word_dict[word]] = 1
    f.write(','.join([str(x) for x in query_token]) + "\t" + ','.join(
        [str(x) for x in pos_token]) + "\t" + ','.join(
            [str(x) for x in neg_token]) + "\n")
f.close()

# Convert query and pos in the test set to bag-of-words vectors, writing the
# labels to a separate file for evaluation.
f = open("test.txt", "w")
fa = open("label.txt", "w")
for line in test_set:
    query = line[0].strip().split(" ")
    pos = line[1].strip().split(" ")
    label = line[2]
    query_token = [0] * (len(word_dict) + 1)
    for word in query:
        query_token[word_dict[word]] = 1
    pos_token = [0] * (len(word_dict) + 1)
    for word in pos:
        pos_token[word_dict[word]] = 1
    f.write(','.join([str(x) for x in query_token]) + "\t" + ','.join(
        [str(x) for x in pos_token]) + "\n")
    fa.write(label + "\n")
f.close()
fa.close()
(Three collapsed diffs are not shown.)
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sklearn.metrics

# Read the ground-truth labels written by preprocess.py.
label = []
filename = './data/label.txt'
f = open(filename, "r")
f.readline()  # skip the first line
num = 0
for line in f.readlines():
    num = num + 1
    line = line.strip()
    label.append(float(line))
f.close()
print(num)  # number of test examples

# Parse the predicted similarities from result.txt; each kept line is assumed
# to look like "..., query_doc_sim: [0.83...], ..." (see run.sh).
filename = './result.txt'
sim = []
for line in open(filename):
    line = line.strip().split(",")
    line[1] = line[1].split(":")
    line = line[1][1].strip(" ")
    line = line.strip("[")
    line = line.strip("]")
    sim.append(float(line))

auc = sklearn.metrics.roc_auc_score(label, sim)
print("auc = ", auc)
--- a/models/match/dssm/model.py
+++ b/models/match/dssm/model.py
@@ -73,6 +73,7 @@ class Model(ModelBase):
 
         query_fc = fc(inputs[0], self.hidden_layers, self.hidden_acts,
                       ['query_l1', 'query_l2', 'query_l3'])
         doc_pos_fc = fc(inputs[1], self.hidden_layers, self.hidden_acts,
                         ['doc_pos_l1', 'doc_pos_l2', 'doc_pos_l3'])
+
         R_Q_D_p = fluid.layers.cos_sim(query_fc, doc_pos_fc)
@@ -93,7 +94,7 @@ class Model(ModelBase):
 
         prob = fluid.layers.softmax(concat_Rs, axis=1)
         hit_prob = fluid.layers.slice(
-            prob, axes=[0, 1], starts=[0, 0], ends=[4, 1])
+            prob, axes=[0, 1], starts=[0, 0], ends=[128, 1])
         loss = -fluid.layers.reduce_sum(fluid.layers.log(hit_prob))
         avg_cost = fluid.layers.mean(x=loss)
         self._cost = avg_cost
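For context on the `ends=[4, 1]` -> `ends=[128, 1]` change: `prob` has shape `[batch_size, neg_num + 1]` after the softmax over the cosine similarities, and the slice keeps only column 0 (the positive doc's probability) for every row, so the end index on axis 0 must match the training batch size (128 after step 4 of the DSSM readme below). A rough NumPy sketch of this loss, assuming batch_size=128 and neg_num=1 as in the updated config:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

batch_size, neg_num = 128, 1  # values from the updated config.yaml
# One cosine similarity per candidate doc; column 0 is the positive doc.
concat_rs = np.random.uniform(-1.0, 1.0, size=(batch_size, neg_num + 1))
prob = softmax(concat_rs)
hit_prob = prob[0:batch_size, 0:1]  # equivalent to slice(..., ends=[128, 1])
loss = -np.log(hit_prob).sum()      # reduce_sum of the log hit probability
print(loss)
```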
# DSSM text matching model

The following is a brief directory structure and description for this example:
```
├── data                          # sample data
    ├── train
        ├── train.txt             # sample training data
    ├── test
        ├── test.txt              # sample test data
    ├── preprocess.py             # data preprocessing script
├── __init__.py
├── README.md                     # documentation
├── model.py                      # model definition
├── config.yaml                   # configuration
├── synthetic_reader.py           # reader for the training set
├── synthetic_evaluate_reader.py  # reader for the test set
├── eval.py                       # evaluation script
```
Note: before reading this example, we recommend that you first go through the following:
[PaddleRec beginner tutorial](https://github.com/PaddlePaddle/PaddleRec/blob/master/README.md)

## Contents

- [Model overview](#model-overview)
- [Data preparation](#data-preparation)
- [Runtime environment](#runtime-environment)
- [Quick start](#quick-start)
- [Reproducing results](#reproducing-results)
- [Advanced usage](#advanced-usage)
- [FAQ](#faq)
## Model overview

DSSM is short for Deep Structured Semantic Model, the well-known deep-network-based semantic model. Its core idea is to map the query and the doc into a semantic space of the same dimensionality and maximize the cosine similarity between the query vector and the doc vector, thereby learning a latent semantic model that can be used for retrieval. DSSM has a wide range of applications, such as search-engine retrieval, ad relevance, question answering, and machine translation.

DSSM takes BOW (bag-of-words) input, which discards word-position information: all the words of a sentence go into one bag, and the sentence is converted in this way into a single vector that is fed into a DNN.

The semantic similarity between a Query and a Doc is expressed by the cosine distance between their two vectors, and a softmax then selects the Doc most semantically similar to the Query.

For details of the model, see the paper [DSSM](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/cikm2013_DSSM_fullversion.pdf):
<p align="center">
<img align="center" src="../../../doc/imgs/dssm.png">
<p>
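To make this concrete, here is a minimal, framework-free sketch of the BOW encoding and cosine scoring just described. The toy vocabulary and sentences are purely illustrative and are not part of the dataset or the PaddleRec API:

```python
import numpy as np

def bow_vector(sentence, word_dict):
    # 0/1 bag-of-words encoding, matching what preprocess.py produces.
    vec = np.zeros(len(word_dict) + 1)
    for word in sentence.split(" "):
        if word in word_dict:
            vec[word_dict[word]] = 1
    return vec

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

word_dict = {"how": 1, "to": 2, "cook": 3, "rice": 4, "noodles": 5}
query = bow_vector("how to cook rice", word_dict)
doc_pos = bow_vector("cook rice", word_dict)
doc_neg = bow_vector("fry noodles", word_dict)
print(cosine_sim(query, doc_pos))  # high: shares "cook" and "rice" with the query
print(cosine_sim(query, doc_neg))  # zero: shares no vocabulary words with the query
```

In the real model, each 0/1 vector has trigram_d = 1439 entries and is passed through the fully connected tower before the cosine similarity is computed.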
## Data preparation

We have released our in-house evaluation collections, comprising four datasets: Baidu Zhidao, ECOM, QQSIM and UNICOM. Here we pick the Baidu Zhidao (zhidao) dataset for training. Run the following commands to download it:
```
wget --no-check-certificate https://baidu-nlp.bj.bcebos.com/simnet_dataset-1.0.0.tar.gz
tar xzf simnet_dataset-1.0.0.tar.gz
rm simnet_dataset-1.0.0.tar.gz
```
## Runtime environment

PaddlePaddle >= 1.7.2

python 2.7/3.5/3.6/3.7

PaddleRec >= 0.1

os: windows/linux/macos
## Quick start

This example ships with sample data so you can try it quickly. Simply run the following command to start training:
```
python -m paddlerec.run -m models/match/dssm/config.yaml
```
## Reproducing results

To help users run every model quickly, we provide sample data under each model directory. To reproduce the results in this readme, follow the steps below in order.

1. Make sure your current directory is PaddleRec/models/match/dssm.
2. Download and extract the dataset in the data directory with the following commands:
```
cd data
wget --no-check-certificate https://baidu-nlp.bj.bcebos.com/simnet_dataset-1.0.0.tar.gz
tar xzf simnet_dataset-1.0.0.tar.gz
rm simnet_dataset-1.0.0.tar.gz
```
3. We provide a script that quickly converts the Chinese text in the dataset into a trainable format. After extracting the dataset you will see a file named zhidao in the directory. Run the provided preprocess.py under a python3 environment to generate the training-ready files train.txt, test.txt and label.txt, then move them into the train and test directories so they can be used during training:
```
mv data/zhidao ./
rm -rf data
python3 preprocess.py
rm -f ./train/train.txt
mv train.txt ./train
rm -f ./test/test.txt
mv test.txt test
cd ..
```
The preprocessed format is as follows (see the parsing sketch after these steps):
the training set has three sparse BOW vectors per line: query, pos, neg;
the test set has two sparse BOW vectors per line: query, pos;
label.txt holds the labels corresponding to the test set.
4. Back in the dssm directory, open config.yaml and change the following parameters:
set workspace to your current absolute path (you can get it with the pwd command);
change batch_size in dataset_train from 8 to 128.
5. Run the script to start training. It runs python -m paddlerec.run -m ./config.yaml, writes the predictions to a result file, and then launches the evaluation script eval.py to compute the AUC:
```
sh run.sh
```
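As referenced in step 3, here is a small sketch of how one preprocessed train.txt line can be parsed. It mirrors what the readers are expected to do conceptually; the helper `parse_line` and the 5-entry toy vectors are ours for illustration (real vectors have trigram_d = 1439 entries):

```python
def parse_line(line, slot_names=("query", "doc_pos", "doc_neg")):
    # Tab-separated slots, each a comma-separated 0/1 bag-of-words vector.
    slots = line.rstrip("\n").split("\t")
    return {name: [float(x) for x in slot.split(",")]
            for name, slot in zip(slot_names, slots)}

# Toy line; test.txt lines look the same but carry only query and pos.
sample = "0,1,0,1,0\t0,1,0,0,0\t0,0,0,0,1"
parsed = parse_line(sample)
print({name: sum(vec) for name, vec in parsed.items()})  # active words per slot
```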
## Advanced usage

## FAQ
#!/bin/bash
echo "................run................."
# Train and run inference; capture all console output.
python -m paddlerec.run -m ./config.yaml >result1.txt
# Keep only the lines that carry the predicted query-doc similarities.
grep -i "query_doc_sim" ./result1.txt >./result2.txt
# Drop the last matched line so result.txt aligns with label.txt.
sed '$d' result2.txt >result.txt
rm -f result1.txt
rm -f result2.txt
# Compute AUC against data/label.txt.
python eval.py
--- a/models/match/readme.md
+++ b/models/match/readme.md
@@ -1,6 +1,6 @@
 # Matching model zoo
 
 ## Overview
-We provide PaddleRec implementations of the model algorithms commonly used in matching tasks, together with single-machine training & prediction quality metrics and distributed training & prediction performance metrics. The implemented models include [DSSM](http://gitlab.baidu.com/tangwei12/paddlerec/tree/develop/models/match/dssm) and [MultiView-Simnet](http://gitlab.baidu.com/tangwei12/paddlerec/tree/develop/models/match/multiview-simnet).
+We provide PaddleRec implementations of the model algorithms commonly used in matching tasks, together with single-machine training & prediction quality metrics and distributed training & prediction performance metrics. The implemented models include [DSSM](http://gitlab.baidu.com/tangwei12/paddlerec/tree/develop/models/match/dssm), [MultiView-Simnet](http://gitlab.baidu.com/tangwei12/paddlerec/tree/develop/models/match/multiview-simnet) and [match-pyramid](https://github.com/PaddlePaddle/PaddleRec/tree/master/models/match/match-pyramid).
 
 More models are being added to the library; stay tuned.
@@ -18,6 +18,8 @@
 | :------------------: | :--------------------: | :---------: |
 | DSSM | Deep Structured Semantic Models | [CIKM 2013][Learning Deep Structured Semantic Models for Web Search using Clickthrough Data](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/cikm2013_DSSM_fullversion.pdf) |
 | MultiView-Simnet | Multi-view Simnet for Personalized recommendation | [WWW 2015][A Multi-View Deep Learning Approach for Cross Domain User Modeling in Recommendation Systems](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/frp1159-songA.pdf) |
+| match-pyramid | Text Matching as Image Recognition | [arXiv 2016][Text Matching as Image Recognition](https://arxiv.org/pdf/1602.06359.pdf) |
+
 
 Below is a brief introduction to each model (note: the figures are taken from the papers linked above).
@@ -31,24 +33,26 @@
 <img align="center" src="../../doc/imgs/multiview-simnet.png">
 <p>
 
+[match-pyramid](https://arxiv.org/pdf/1602.06359.pdf):
+<p align="center">
+<img align="center" src="../../doc/imgs/match-pyramid.png">
+<p>
+
 ## Tutorial (quick start)
-### Training
-```shell
-git clone https://github.com/PaddlePaddle/PaddleRec.git paddle-rec
-cd paddle-rec
-
+### Training & prediction
+Each model ships with sample data for a quick try-out. Run the following commands directly under the paddlerec directory to start training:
+```
 python -m paddlerec.run -m models/match/dssm/config.yaml # dssm
 python -m paddlerec.run -m models/match/multiview-simnet/config.yaml # multiview-simnet
+python -m paddlerec.run -m models/contentunderstanding/match-pyramid/config.yaml # match-pyramid
 ```
 
-### Prediction
-```shell
-# In the model's config.yaml, set workspace to the absolute path of the current directory
-# In the model's config.yaml, set mode to infer_runner
-# Example: mode: train_runner -> mode: infer_runner
-# In infer_runner, set class to: class: infer
-# Change the phase section to the infer configuration; see the comments in config.yaml
-# After editing config.yaml, run:
-python -m paddlerec.run -m ./config.yaml # dssm, for example
-```
+### Reproducing results
+Each model's readme contains a detailed tutorial on reproducing its results; see the corresponding model directory.
+
+### Model quality (test)
+
+| Dataset | Model | auc | map |
+| :------: | :--------------: | :---: | :---: |
+| zhidao | DSSM | 0.55 | -- |
+| Letor07 | match-pyramid | -- | 0.42 |
+| zhidao | multiview-simnet | 0.59 | -- |