## Build Bert-As-Service in 10 minutes

([简体中文](./BERT_10_MINS_CN.md)|English)

The goal of Bert-As-Service is simple: given a sentence, the service computes its semantic vector representation and returns it to the user. [BERT](https://arxiv.org/abs/1810.04805) is a popular model in NLP and has achieved strong results on a variety of public NLP tasks. The semantic vectors computed by BERT are often used as input features for other NLP models, which can greatly improve their performance. Bert-As-Service lets users easily obtain semantic vector representations of text and apply them to their own tasks. To this end, we show in four steps how to build such a service with Paddle Serving in ten minutes. All the code and files used here can be found in the [example](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/bert) of Paddle Serving.

#### Step 1: Save the servable model

Paddle Serving supports various models trained with Paddle, and saves a servable model by specifying the model's input and output variables. For convenience, we can load a trained Chinese BERT model from PaddleHub and save a deployable service with a few lines of code. The server and client configurations are placed in the `bert_seq20_model` and `bert_seq20_client` folders, respectively.

``` python
import paddlehub as hub
import paddle_serving_client.io as serving_io

# Load the pretrained Chinese BERT model from PaddleHub.
model_name = "bert_chinese_L-12_H-768_A-12"
module = hub.Module(model_name)
inputs, outputs, program = module.context(
    trainable=True, max_seq_len=20)

# Name the input and output variables the service will expose.
feed_keys = ["input_ids", "position_ids", "segment_ids",
             "input_mask"]
fetch_keys = ["pooled_output", "sequence_output"]
feed_dict = dict(zip(feed_keys, [inputs[x] for x in feed_keys]))
fetch_dict = dict(zip(fetch_keys, [outputs[x] for x in fetch_keys]))

# Save the servable model and the matching client configuration.
serving_io.save_model("bert_seq20_model", "bert_seq20_client",
                      feed_dict, fetch_dict, program)
```
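
As a quick sanity check, you can list what `save_model` wrote out. A minimal sketch: the client configuration `bert_seq20_client/serving_client_conf.prototxt` is loaded by the client in Step 4, while the remaining file names are implementation details and may differ across versions.

``` python
import os

# Inspect the artifacts produced by save_model. Exact file names other
# than serving_client_conf.prototxt are an assumption and may vary.
for folder in ("bert_seq20_model", "bert_seq20_client"):
    print(folder, "->", sorted(os.listdir(folder)))
```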

#### Step 2: Launch the service

``` shell
python -m paddle_serving_server_gpu.serve --model bert_seq20_model --thread 10 --port 9292 --gpu_ids 0
```
| Parameter | Meaning                                          |
| --------- | ------------------------------------------------ |
| model     | path to the server configuration and model files |
| thread    | number of server-side worker threads             |
| port      | server port number                               |
| gpu_ids   | IDs of the GPU cards to use                      |
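
If you prefer to manage the server from Python (for example, in a test harness), the same command can be launched as a subprocess. This is only a convenience sketch reusing the flags documented above; the shell command remains the documented interface.

``` python
import subprocess

# Launch the GPU server with the same flags as the shell command above.
server = subprocess.Popen([
    "python", "-m", "paddle_serving_server_gpu.serve",
    "--model", "bert_seq20_model",
    "--thread", "10",
    "--port", "9292",
    "--gpu_ids", "0",
])
# Call server.terminate() to stop the service when finished.
```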

#### Step 3: Data preprocessing on the client side

Paddle Serving ships with built-in data preprocessing logic for many models. To compute Chinese BERT semantic representations, we use the `ChineseBertReader` class from `paddle_serving_app`, which turns a raw Chinese sentence into the input fields the model expects (see the sketch after the install command below).

Install `paddle_serving_app`:

```shell
pip install paddle_serving_app
```
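
With the package installed, here is a minimal sketch of the preprocessing step. The field names follow `feed_keys` from Step 1, and the example sentence is arbitrary:

``` python
from paddle_serving_app import ChineseBertReader

# Convert a raw Chinese sentence into the feed fields the model expects.
reader = ChineseBertReader()
feed_dict = reader.process("今天天气真好")
# Expect keys such as input_ids, position_ids, segment_ids and input_mask.
print(sorted(feed_dict.keys()))
```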

#### Step 4: Query the service from the client

The client-side script `bert_client.py` is as follows:

``` python
import sys

from paddle_serving_client import Client
from paddle_serving_app import ChineseBertReader

# Preprocessor that turns a raw Chinese sentence into BERT input fields.
reader = ChineseBertReader()
fetch = ["pooled_output"]
endpoint_list = ["127.0.0.1:9292"]

# Connect to the service using the client configuration saved in Step 1.
client = Client()
client.load_client_config("bert_seq20_client/serving_client_conf.prototxt")
client.connect(endpoint_list)

# Read one sentence per line from stdin and print its semantic vector.
for line in sys.stdin:
    feed_dict = reader.process(line)
    result = client.predict(feed=feed_dict, fetch=fetch)
    print(result)
```

Run:

```shell
cat data.txt | python bert_client.py
```

This reads samples from `data.txt` and prints the results to standard output; each `pooled_output` is a 768-dimensional semantic vector (the `H-768` in the model name).

### Benchmark

We tested the performance of the Paddle Serving based Bert-As-Service on V100 GPUs and compared it with a TensorFlow-based Bert-As-Service. To keep the user-facing configuration identical, we used the same batch size and concurrency level in the stress test. The overall throughput obtained on 4 V100 GPUs is shown below.

![4v100_bert_as_service_benchmark](4v100_bert_as_service_benchmark.png)