([简体中文](./README_CN.md)|English)

<p align="center">
    <br>
<img src='doc/serving_logo.png' width = "600" height = "130">
    <br>
</p>

<p align="center">
    <br>
    <a href="https://travis-ci.com/PaddlePaddle/Serving">
        <img alt="Build Status" src="https://img.shields.io/travis/com/PaddlePaddle/Serving/develop">
    </a>
    <img alt="Release" src="https://img.shields.io/badge/Release-0.6.2-yellowgreen">
    <img alt="Issues" src="https://img.shields.io/github/issues/PaddlePaddle/Serving">
    <img alt="License" src="https://img.shields.io/github/license/PaddlePaddle/Serving">
    <img alt="Slack" src="https://img.shields.io/badge/Join-Slack-green">
    <br>
</p>

- [Motivation](./README.md#motivation)
- [AIStudio Tutorial](./README.md#aistudio-tutorial)
- [Installation](./README.md#installation)
- [Quick Start Example](./README.md#quick-start-example)
- [Document](README.md#document)
- [Community](README.md#community)

<h2 align="center">Motivation</h2>

We believe that deploying a deep learning inference service online will be a standard user-facing application in the future. **The goal of this project**: once you have trained a deep neural network with [Paddle](https://github.com/PaddlePaddle/Paddle), you can easily deploy the model online. A demo of Paddle Serving is shown below:

<h3 align="center">Some Key Features of Paddle Serving</h3>

- Integrates seamlessly with the Paddle training pipeline; most Paddle models can be deployed **with a single command**.
- Supports **industrial serving features** such as model management, online loading, and online A/B testing.
- Provides **highly concurrent and efficient communication** between clients and servers.
- Supports **multiple programming languages** on the client side, such as C++, Python, and Java.

***

- Any model trained with [PaddlePaddle](https://github.com/paddlepaddle/paddle) can be used directly or converted through the [Model Conversion Interface](./doc/SAVE.md) for online deployment with Paddle Serving; a conversion sketch follows this list.
- Supports [Multi-model Pipeline Deployment](./doc/PIPELINE_SERVING.md), providing both REST and RPC interfaces; see the [Pipeline examples](./python/examples/pipeline).
- Supports model zoos from the Paddle ecosystem, such as [PaddleDetection](./python/examples/detection), [PaddleOCR](./python/examples/ocr), and [PaddleRec](https://github.com/PaddlePaddle/PaddleRec/tree/master/recserving/movie_recommender).
- Provides a variety of pre-processing and post-processing utilities for the training, deployment, and other stages, bridging the gap between AI developers and application developers; please refer to the [Serving Examples](./python/examples/).
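
For example, converting a saved Paddle inference model into the Serving format might look like the sketch below; the `paddle_serving_client.convert` tool and its flags are described in [SAVE.md](./doc/SAVE.md), and `./inference_model` is a hypothetical path:

```
# convert an inference model into server- and client-side Serving configs (sketch)
python3 -m paddle_serving_client.convert --dirname ./inference_model \
                                         --serving_server ./serving_server \
                                         --serving_client ./serving_client
```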

<p align="center">
    <img src="doc/demo.gif" width="700">
</p>


<h2 align="center">AIStudio Tutorial</h2>

Here we provide a tutorial on AIStudio (in Chinese): [AIStudio教程-Paddle Serving服务化部署框架](https://www.paddlepaddle.org.cn/tutorials/projectdetail/1555945)

The tutorial covers:

- Paddle Serving environment setup
  - Running in Docker images
  - Installing Paddle Serving with pip
- Quick experience of Paddle Serving
- Advanced tutorial on model deployment
  - Saving/converting models for Paddle Serving
  - Setting up an online inference service
- Paddle Serving examples
  - Paddle Serving for detection
  - Paddle Serving for OCR


<h2 align="center">Installation</h2>

We **highly recommend** running Paddle Serving **in Docker**; please visit [Run in Docker](doc/RUN_IN_DOCKER.md). See [this document](doc/DOCKER_IMAGES.md) for more Docker images.

**Attention:** currently, the default GPU environment of PaddlePaddle 2.1 is CUDA 10.2, so the GPU Docker sample code below is based on CUDA 10.2. We also provide Docker images and whl packages for other GPU environments; if you use a different environment, please check carefully and select the appropriate version.

**Attention:** in the following, 'python' and 'pip' refer to one of Python 3.6/3.7/3.8.

```
# Run CPU Docker
docker pull registry.baidubce.com/paddlepaddle/serving:0.6.2-devel
docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:0.6.2-devel bash
docker exec -it test bash
git clone https://github.com/PaddlePaddle/Serving
```
```
# Run GPU Docker
nvidia-docker pull registry.baidubce.com/paddlepaddle/serving:0.6.2-cuda10.2-cudnn8-devel
nvidia-docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:0.6.2-cuda10.2-cudnn8-devel bash
nvidia-docker exec -it test bash
git clone https://github.com/PaddlePaddle/Serving
```
Install Python dependencies:
```
cd Serving
pip3 install -r python/requirements.txt
```

```shell
pip3 install paddle-serving-client==0.6.2
pip3 install paddle-serving-server==0.6.2 # CPU
pip3 install paddle-serving-app==0.6.2
pip3 install paddle-serving-server-gpu==0.6.2.post102 # GPU with CUDA10.2 + TensorRT7
# DO NOT RUN ALL COMMANDS! Check your GPU environment and select the right one.
pip3 install paddle-serving-server-gpu==0.6.2.post101 # GPU with CUDA10.1 + TensorRT6
pip3 install paddle-serving-server-gpu==0.6.2.post11 # GPU with CUDA11.0 + TensorRT7
```

You may need to use a domestic mirror to speed up the download (in China, you can use the Tsinghua mirror by adding `-i https://pypi.tuna.tsinghua.edu.cn/simple` to the pip command).

If you need packages built from the develop branch, please download them from the [latest packages list](./doc/LATEST_PACKAGES.md) and install them with `pip install`. If you want to compile Paddle Serving yourself, please refer to [How to compile Paddle Serving?](./doc/COMPILE.md)

Packages of paddle-serving-server and paddle-serving-server-gpu support CentOS 6/7, Ubuntu 16/18, and Windows 10.

Packages of paddle-serving-client and paddle-serving-app support Linux and Windows, but paddle-serving-client only supports Python 3.6/3.7/3.8.

**The latest version no longer supports CUDA 9.0/10.0 or Python 2.7/3.5.**

We recommend installing paddle >= 2.1.0:

```
# CPU users, please run
pip3 install paddlepaddle==2.1.0

# GPU users with CUDA 10.2, please run
pip3 install paddlepaddle-gpu==2.1.0 
```

**Note**: if your CUDA version is not 10.2, do not run the above commands directly; instead, refer to the [Paddle official documentation - multi-version whl package list](https://www.paddlepaddle.org.cn/documentation/docs/en/install/Tables_en.html#multi-version-whl-package-list-release).

Select the URL for your GPU environment and install it. For example, Python 3.6 users with CUDA 10.1 should select the `cp36-cp36m` URL corresponding to `cuda10.1-cudnn7-mkl-gcc8.2-avx-trt6.0.1.5`, copy it, and run:
```
pip3 install https://paddle-wheel.bj.bcebos.com/with-trt/2.1.0-gpu-cuda10.1-cudnn7-mkl-gcc8.2/paddlepaddle_gpu-2.1.0.post101-cp36-cp36m-linux_x86_64.whl
```

The default `paddlepaddle-gpu==2.1.0` targets CUDA 10.2 without TensorRT. If you want to install PaddlePaddle with TensorRT, please also check the multi-version whl package list for the keyword `cuda10.2-cudnn8.0-trt7.1.3`. For more information, see [Paddle Serving uses TensorRT](./doc/TENSOR_RT.md).

For other environments and Python versions, please find the corresponding link in the table and install it with pip.


For **Windows Users**, please read the document [Paddle Serving for Windows Users](./doc/WINDOWS_TUTORIAL.md)

<h2 align="center">Quick Start Example</h2>

This quick start example is mainly for users who already have a model to deploy, and we also provide a ready-made model that can be used. If you want to learn the complete process from offline training to online serving, please refer to the AIStudio tutorial above.

### Boston House Price Prediction model

Go to the Serving git directory and change into the `fit_a_line` example:
``` shell
cd Serving/python/examples/fit_a_line
sh get_data.sh
```

Paddle Serving provides both HTTP- and RPC-based services for users to access.

### RPC service

A user can also start an RPC service with `paddle_serving_server.serve`. The RPC service is usually faster than the HTTP service, although the user needs to write some code with Paddle Serving's Python client API. Note that we do not specify `--name` here.
``` shell
python3 -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292
```
<center>

| Argument                                       | Type | Default | Description                                           |
| ---------------------------------------------- | ---- | ------- | ----------------------------------------------------- |
| `thread`                                       | int  | `2`     | Number of brpc service threads                        |
| `op_num`                                       | int[]| `0`     | Thread number for each model in asynchronous mode     |
| `op_max_batch`                                 | int[]| `32`    | Batch size for each model in asynchronous mode        |
| `gpu_ids`                                      | str[]| `"-1"`  | GPU card IDs for each model                           |
| `port`                                         | int  | `9292`  | Exposed port of current service to users              |
| `model`                                        | str[]| `""`    | Path of paddle model directory to be served           |
| `mem_optim_off`                                | -    | -       | Disable memory / graphic memory optimization          |
| `ir_optim`                                     | bool | False   | Enable analysis and optimization of calculation graph |
| `use_mkl` (Only for cpu version)               | -    | -       | Run inference with MKL                                |
| `use_trt` (Only for trt version)               | -    | -       | Run inference with TensorRT                           |
| `use_lite` (Only for Intel x86 CPU or ARM CPU) | -    | -       | Run PaddleLite inference                              |
| `use_xpu`                                      | -    | -       | Run PaddleLite inference with Baidu Kunlun XPU        |
| `precision`                                    | str  | FP32    | Precision mode, supports FP32, FP16, INT8             |
| `use_calib`                                    | bool | False   | Only for deployment with TensorRT                     |
| `gpu_multi_stream`                             | bool | False   | Enable GPU multi-stream mode to gain larger QPS       |

#### Description of asynchronous mode

Asynchronous mode is suitable when (1) the number of requests is very large, or (2) multiple models are chained together and you want to specify the concurrency of each model. It helps improve the throughput (QPS) of the service, but the latency of an individual request increases slightly.

In asynchronous mode, each model starts the number of threads you specify, and each thread holds one model instance; in other words, each model is backed by a thread pool of N threads, and tasks are taken from the pool's task queue for execution. Each RPC server thread is only responsible for putting requests into the task queue of the model's thread pool; after a task is executed, it is removed from the queue.

In the table above, the number of RPC server threads is specified by `--thread` (default 2). `--op_num` specifies the number of threads in each model's thread pool (default 0, meaning asynchronous mode is disabled). `--op_max_batch` specifies the batch size for each model (default 32; it takes effect when `--op_num` is not 0).
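
For example, launching `uci_housing_model` in asynchronous mode with a 4-thread model pool might look like the following sketch, which simply combines the flags documented above:

```
python3 -m paddle_serving_server.serve --model uci_housing_model --port 9292 --thread 10 --op_num 4 --op_max_batch 32
```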
#### When you want a model to use multiple GPU cards
```
python3 -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292 --gpu_ids 0,1,2
```
#### When you want to serve two models
```
python3 -m paddle_serving_server.serve --model uci_housing_model_1 uci_housing_model_2 --thread 10 --port 9292
```
#### When you want two models, each using multiple GPU cards
```
python3 -m paddle_serving_server.serve --model uci_housing_model_1 uci_housing_model_2 --thread 10 --port 9292 --gpu_ids 0,1 1,2
```
#### When a service contains two models, each model uses multiple GPU cards, runs in asynchronous mode, and has its own concurrency
```
python3 -m paddle_serving_server.serve --model uci_housing_model_1 uci_housing_model_2 --thread 10 --port 9292 --gpu_ids 0,1 1,2 --op_num 4 8
```
</center>

```python
# A user can access the RPC service through the paddle_serving_client API
from paddle_serving_client import Client
import numpy as np
client = Client()
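# load the client-side configuration generated when the servable model was saved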
client.load_client_config("uci_housing_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9292"])
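# one sample with the 13 normalized features of the Boston housing dataset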
data = [0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727,
        -0.1583, -0.0584, 0.6283, 0.4919, 0.1856, 0.0795, -0.0332]
fetch_map = client.predict(feed={"x": np.array(data).reshape(1,13,1)}, fetch=["price"])
print(fetch_map)
```
Here, the `client.predict` function takes two arguments: `feed` is a Python dict mapping model input variable alias names to values, and `fetch` specifies the prediction variables to be returned by the server. In this example, the names `"x"` and `"price"` were assigned when the servable model was saved during training.
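
The returned `fetch_map` is a Python dict keyed by the fetch names. For this model, the printed result should look roughly like the line below; the value matches the web-service response shown later in this section, while the exact dtype and shape are assumptions:

```
{'price': array([[18.901152]], dtype=float32)}
```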


### WEB service

Users can also put the data-format processing logic on the server side, so that the service can be accessed directly with curl; see the following example in `python/examples/fit_a_line`.

```
python3 -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292 --name uci
```
For the client side:
```
curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"x": [0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727, -0.1583, -0.0584, 0.6283, 0.4919, 0.1856, 0.0795, -0.0332]}], "fetch":["price"]}' http://127.0.0.1:9292/uci/prediction
```
The response is:
```
{"result":{"price":[[18.901151657104492]]}}
```
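
Equivalently, you can send the same request from Python with the `requests` library (a minimal sketch, assuming the web service above is running on port 9292):

```python
import requests

# the same feed/fetch payload as in the curl example above
payload = {
    "feed": [{"x": [0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727,
                    -0.1583, -0.0584, 0.6283, 0.4919, 0.1856, 0.0795, -0.0332]}],
    "fetch": ["price"],
}
resp = requests.post("http://127.0.0.1:9292/uci/prediction", json=payload)
print(resp.json())  # e.g. {"result": {"price": [[18.901151657104492]]}}
```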
<h3 align="center">Pipeline Service</h3>

Paddle Serving provides industry-leading multi-model pipeline services, which support the real business scenarios of major companies; please refer to [OCR word recognition](./python/examples/pipeline/ocr).

First, download the two models:
```
python3 -m paddle_serving_app.package --get_model ocr_rec
tar -xzvf ocr_rec.tar.gz
python3 -m paddle_serving_app.package --get_model ocr_det
tar -xzvf ocr_det.tar.gz
```
Then start the server side, launching the two models as one standalone web service:
```
python3 web_service.py
```
Send an HTTP request:
```
python3 pipeline_http_client.py
```
Send a gRPC request:
```
python3 pipeline_rpc_client.py
```
The output is:
```
{'err_no': 0, 'err_msg': '', 'key': ['res'], 'value': ["['土地整治与土壤修复研究中心', '华南农业大学1素图']"]}
```
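
For reference, a minimal HTTP pipeline client might look like the sketch below; the port 9999, the URL path, the image path, and the `key`/`value` request format are assumptions inferred from the output format above — see `pipeline_http_client.py` in the example for the actual code.

```python
import base64
import requests

# encode a test image as base64 (assumed input format for the pipeline service)
with open("imgs/1.jpg", "rb") as f:
    image = base64.b64encode(f.read()).decode("utf8")

# the key/value request format mirrors the key/value response shown above
payload = {"key": ["image"], "value": [image]}
resp = requests.post("http://127.0.0.1:9999/ocr/prediction", json=payload)
print(resp.json())
```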


<h2 align="center">Document</h2>

### New to Paddle Serving
- [How to save a servable model?](doc/SAVE.md)
- [Write Bert-as-Service in 10 minutes](doc/BERT_10_MINS.md)
- [Paddle Serving Examples](python/examples)
- [How to process natural data in Paddle Serving?(Chinese)](doc/PROCESS_DATA.md)
- [How to process level of detail(LOD)?](doc/LOD.md)

### Developers
- [How to deploy Paddle Serving on K8S?(Chinese)](doc/PADDLE_SERVING_ON_KUBERNETES.md)
- [How to route Paddle Serving to secure endpoint?(Chinese)](doc/SERVING_AUTH_DOCKER.md)
- [How to develop a new Web Service?](doc/NEW_WEB_SERVICE.md)
- [Compile from source code](doc/COMPILE.md)
- [Develop Pipeline Serving](doc/PIPELINE_SERVING.md)
- [Deploy Web Service with uWSGI](doc/UWSGI_DEPLOY.md)
- [Hot loading for model file](doc/HOT_LOADING_IN_SERVING.md)
- [Paddle Serving uses TensorRT](doc/TENSOR_RT.md)

### About Efficiency
- [How to profile Paddle Serving latency?](python/examples/util)
- [How to optimize performance?](doc/PERFORMANCE_OPTIM.md)
- [Deploy multi-services on one GPU(Chinese)](doc/MULTI_SERVICE_ON_ONE_GPU_CN.md)
- [GPU Benchmarks(Chinese)](doc/BENCHMARKING_GPU.md)

### Design
- [Design Doc](doc/DESIGN_DOC.md)

### FAQ
- [FAQ(Chinese)](doc/FAQ.md)

<h2 align="center">Community</h2>

### Slack

To connect with other users and contributors, you are welcome to join our [Slack channel](https://paddleserving.slack.com/archives/CUBPKHKMJ).

### Contribution

If you want to contribute code to Paddle Serving, please refer to the [Contribution Guidelines](doc/CONTRIBUTE.md).

- Special Thanks to [@BeyondYourself](https://github.com/BeyondYourself) for complementing the gRPC tutorial, updating the FAQ doc, and fixing the mkdir command typo
- Special Thanks to [@mcl-stone](https://github.com/mcl-stone) for updating the faster_rcnn benchmark
- Special Thanks to [@cg82616424](https://github.com/cg82616424) for updating the unet benchmark and fixing the resize comment error
- Special Thanks to [@cuicheng01](https://github.com/cuicheng01) for providing 11 PaddleClas models

### Feedback

For any feedback or to report a bug, please propose a [GitHub Issue](https://github.com/PaddlePaddle/Serving/issues).

### License

[Apache 2.0 License](https://github.com/PaddlePaddle/Serving/blob/develop/LICENSE)