Unverified commit 67f35e98 authored by Jiawei Wang, committed by GitHub

Update README.md

Parent 1387e923
## Criteo CTR with Sparse Parameter Server
([简体中文](README_CN.md)|English)
### Get Sample Dataset
```
sh get_data.sh
```
### Train and Save Model
```
python local_train.py
```
After the script finishes, the folders ctr_server_model and ctr_client_config, as well as ctr_server_model_kv and ctr_client_conf_kv, will be generated in the current directory.
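For orientation, the sketch below shows the general way a serving model folder and a client config folder can be exported with `paddle_serving_client.io.save_model`, which is the mechanism behind these outputs. The toy network, variable names, and folder names are illustrative assumptions, not the contents of the actual local_train.py.

```python
# Minimal sketch (Paddle 1.x static graph assumed): build and train a program,
# then export a serving model folder plus a client config folder via save_model.
# The tiny fc network and the feed/fetch names below are placeholders only.
import paddle.fluid as fluid
from paddle_serving_client.io import save_model

dense_input = fluid.data(name="dense_input", shape=[None, 13], dtype="float32")
prob = fluid.layers.fc(input=dense_input, size=2, act="softmax")

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
# ... a real script would run its training loop here ...

# feed/fetch dicts map client-side names to fluid Variables
save_model("ctr_server_model", "ctr_client_config",
           {"dense_input": dense_input}, {"prob": prob},
           fluid.default_main_program())
```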
### Start Sparse Parameter Server
```
cp ../../../build_server/core/predictor/seq_generator seq_generator
cp ../../../build_server/output/bin/cube* ./cube/
sh cube_prepare.sh &
```
Here, the sparse parameters of the model are stored in the Cube sparse parameter server. For more details about Cube, please read [Cube: Sparse Parameter Server (Local Mode)](../../../doc/CUBE_LOCAL.md).
### Start RPC Predictor (the number of serving threads is 4, configurable in test_server.py)
```
python test_server.py ctr_serving_model_kv
```
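As a reference for where that thread count lives, here is a rough sketch of what a dist-kv serving script like test_server.py may contain, written against the `paddle_serving_server` Python API. The op names, port, and workdir are assumptions rather than values copied from the actual script.

```python
# Hypothetical sketch of an RPC serving script that uses the dist-kv infer op,
# which pulls sparse parameters from the cube server at prediction time.
import sys
from paddle_serving_server import OpMaker, OpSeqMaker, Server

op_maker = OpMaker()
read_op = op_maker.create('general_reader')
dist_kv_infer_op = op_maker.create('general_dist_kv_infer')  # assumed op name
response_op = op_maker.create('general_response')

op_seq_maker = OpSeqMaker()
op_seq_maker.add_op(read_op)
op_seq_maker.add_op(dist_kv_infer_op)
op_seq_maker.add_op(response_op)

server = Server()
server.set_op_sequence(op_seq_maker.get_op_sequence())
server.set_num_threads(4)                  # the 4 serving threads mentioned above
server.load_model_config(sys.argv[1])      # e.g. ctr_serving_model_kv
server.prepare_server(workdir="work_dir", port=9292, device="cpu")  # assumed port
server.run_server()
```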
### Run Prediction
```
python test_client.py ctr_client_conf/serving_client_conf.prototxt ./raw_data
```
@@ -34,17 +37,17 @@ python test_client.py ctr_client_conf/serving_client_conf.prototxt ./raw_data
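A simplified sketch of the prediction client is shown below, using the `paddle_serving_client` API. The feed names, the dummy sample, and the server address are illustrative assumptions; the real test_client.py parses the Criteo raw_data files passed on the command line.

```python
# Hypothetical prediction client: load the generated client config, connect to
# the RPC server, and send one dummy Criteo-style sample (13 dense features and
# 26 sparse slots). Feed/fetch names are assumptions.
import sys
from paddle_serving_client import Client

client = Client()
client.load_client_config(sys.argv[1])   # ctr_client_conf/serving_client_conf.prototxt
client.connect(["127.0.0.1:9292"])       # assumed server address

feed = {"dense_input": [0.0] * 13}
for i in range(26):
    feed["embedding_{}.tmp_0".format(i)] = [1]   # assumed sparse slot names

fetch_map = client.predict(feed=feed, fetch=["prob"])
print(fetch_map)
```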
### Benchmark
CPU: Intel(R) Xeon(R) CPU 6148 @ 2.40GHz
Model: [Criteo CTR](https://github.com/PaddlePaddle/Serving/blob/develop/python/examples/ctr_criteo_with_cube/network_conf.py)
Server core/thread num: 4/8
Run:
```
bash benchmark.sh
```
Each client thread sends 1,000 batches (a rough sketch of such a benchmark driver is shown after the table below).
| client thread num | prepro | client infer | op0 | op1 | op2 | postpro | avg_latency (ms) | qps |
| ------------------ | ------ | ------------ | ------ | ----- | ------ | ------- | ---------------- | ----- |
@@ -54,10 +57,10 @@ bash benchmark.sh
| 8 | 0.044 | 8.230 | 0.028 | 0.464 | 0.0023 | 0.0034 | 14.191 | 563.8 |
| 16 | 0.048 | 21.037 | 0.028 | 0.455 | 0.0025 | 0.0041 | 27.236 | 587.5 |
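For reference, the sketch below shows one way such a multi-threaded benchmark driver could be written: each thread opens its own client connection and times a fixed number of requests. It is an illustration of the measurement, not the actual benchmark script; the thread count, feed names, and server address are assumptions.

```python
# Hypothetical multi-threaded benchmark driver: every thread sends `num_batches`
# requests through its own client connection and reports its elapsed time.
import threading
import time
from paddle_serving_client import Client

def run_thread(thread_id, num_batches=1000):
    client = Client()
    client.load_client_config("ctr_client_conf/serving_client_conf.prototxt")
    client.connect(["127.0.0.1:9292"])   # assumed server address
    feed = {"dense_input": [0.0] * 13}
    for i in range(26):
        feed["embedding_{}.tmp_0".format(i)] = [1]   # assumed sparse slot names
    start = time.time()
    for _ in range(num_batches):
        client.predict(feed=feed, fetch=["prob"])
    print("thread %d: %.3f s for %d batches"
          % (thread_id, time.time() - start, num_batches))

threads = [threading.Thread(target=run_thread, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```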
The average latency per thread is shown below:
![avg cost](../../../doc/criteo-cube-benchmark-avgcost.png)
The QPS of each thread is shown below:
![qps](../../../doc/criteo-cube-benchmark-qps.png)