## Build Bert-As-Service in 10 minutes

([简体中文](./BERT_10_MINS_CN.md)|English)

The goal of Bert-As-Service is simple: given a sentence from the user, the service returns its semantic vector representation. The [Bert model](https://arxiv.org/abs/1810.04805) is popular in the current NLP field and has achieved good results on a variety of public NLP tasks. The semantic vectors it computes are also widely used as input to other NLP models, where they can greatly improve performance. Bert-As-Service lets users easily obtain semantic vector representations of text and apply them to their own tasks. To achieve this goal, we show in four steps how Paddle Serving can be used to build such a service in ten minutes. All the code and files for this example can be found in the [Example](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/bert) folder of Paddle Serving.

#### Step 1: Save the serviceable model

Paddle Serving supports a variety of models trained with Paddle, and saves a serviceable model by specifying the input and output variables of the model. For convenience, we can load a trained Chinese BERT model from PaddleHub and save a deployable service with a few lines of code. The server and client configurations are placed in the `bert_seq20_model` and `bert_seq20_client` folders, respectively.

[//file]:#bert_10.py
``` python
import paddlehub as hub
import paddle_serving_client.io as serving_io

# Load the pretrained Chinese BERT model from PaddleHub
model_name = "bert_chinese_L-12_H-768_A-12"
module = hub.Module(model_name)
inputs, outputs, program = module.context(
    trainable=True, max_seq_len=20)

# Map the model's input and output variables to named feed/fetch targets
feed_keys = ["input_ids", "position_ids", "segment_ids",
             "input_mask"]
fetch_keys = ["pooled_output", "sequence_output"]
feed_dict = dict(zip(feed_keys, [inputs[x] for x in feed_keys]))
fetch_dict = dict(zip(fetch_keys, [outputs[x] for x in fetch_keys]))

# Save the server-side model and the matching client configuration
serving_io.save_model("bert_seq20_model", "bert_seq20_client",
                      feed_dict, fetch_dict, program)
```

#### Step 2: Launch the service

[//file]:#server.sh
``` shell
python -m paddle_serving_server_gpu.serve --model bert_seq20_model --thread 10 --port 9292 --gpu_ids 0
```

| Parameter  | Meaning                                          |
| ---------- | ------------------------------------------------ |
| `model`    | path to the server configuration and model files |
| `thread`   | number of server-side worker threads             |
| `port`     | server port number                               |
| `gpu_ids`  | index of the GPU(s) to use                       |

#### Step 3: Data preprocessing on the client side

Paddle Serving has built-in data preprocessing logic for many common models. To compute the Chinese BERT semantic representation, we use the `ChineseBertReader` class from `paddle_serving_app`, which lets developers easily turn a raw Chinese sentence into the input fields expected by the model.

Install `paddle_serving_app`:

[//file]:#pip_app.sh
```shell
pip install paddle_serving_app
```
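
As a quick sanity check, you can preprocess a single sentence and confirm that the produced fields match the feed variables saved in Step 1. This is a minimal sketch; the example sentence is arbitrary:

``` python
from paddle_serving_app import ChineseBertReader

reader = ChineseBertReader()
# Turn one raw Chinese sentence into BERT input fields
feed_dict = reader.process("语义向量表示")
# The keys should match the feed variables saved in Step 1:
# input_ids, position_ids, segment_ids, input_mask
print(sorted(feed_dict.keys()))
```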

#### Step 4: Access the service from the client

The client-side script `bert_client.py` is as follows:

[//file]:#bert_client.py
``` python
import sys
from paddle_serving_client import Client
from paddle_serving_app import ChineseBertReader

# Preprocessor that converts raw Chinese text into BERT input fields
reader = ChineseBertReader()
fetch = ["pooled_output"]
endpoint_list = ["127.0.0.1:9292"]

# Connect to the server using the client configuration saved in Step 1
client = Client()
client.load_client_config("bert_seq20_client/serving_client_conf.prototxt")
client.connect(endpoint_list)

# Read one sentence per line from stdin and fetch its semantic vector
for line in sys.stdin:
    feed_dict = reader.process(line)
    result = client.predict(feed=feed_dict, fetch=fetch)
    print(result)
```

Run:

[//file]:#bert_10_cli.sh
```shell
cat data.txt | python bert_client.py
```

This reads samples from `data.txt` and prints the resulting vectors to standard output.
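
Once you have the vectors, you can plug them into your own task. The sketch below compares two sentences by cosine similarity; it assumes that `predict` returns a dict of numpy arrays keyed by the fetch name, which may vary across `paddle_serving_client` versions:

``` python
import numpy as np
from paddle_serving_client import Client
from paddle_serving_app import ChineseBertReader

def cosine_similarity(a, b):
    # Normalized dot product of two flattened vectors
    a, b = np.asarray(a).ravel(), np.asarray(b).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

reader = ChineseBertReader()
client = Client()
client.load_client_config("bert_seq20_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9292"])

# Embed two sentences and compare their pooled vectors
# (assumes predict returns {"pooled_output": ndarray})
vec_a = client.predict(feed=reader.process("今天天气很好"), fetch=["pooled_output"])["pooled_output"]
vec_b = client.predict(feed=reader.process("今天天气不错"), fetch=["pooled_output"])["pooled_output"]
print(cosine_similarity(vec_a, vec_b))
```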

### Benchmark

We tested the performance of the Paddle Serving based Bert-As-Service on V100 GPUs and compared it with the TensorFlow based Bert-As-Service. From the user's point of view, we used the same batch size and concurrency level for stress testing. The overall throughput obtained on 4 V100s is shown below.

![4v100_bert_as_service_benchmark](4v100_bert_as_service_benchmark.png)

<!--
yum install -y libXext libSM libXrender
pip install paddlehub paddle_serving_server paddle_serving_client
sh pip_app.sh
python bert_10.py
sh server.sh &
wget https://paddle-serving.bj.bcebos.com/bert_example/data-c.txt --no-check-certificate
head -n 500 data-c.txt > data.txt
cat data.txt | python bert_client.py
if [[ $? -eq 0 ]]; then
    echo "test success"
else
    echo "test fail"
fi
-->