# Latest Wheel Packages

## CPU server
### Python 3
```
# Compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server-0.0.0-py3-none-any.whl
```
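On a machine with network access, the wheel does not need to be downloaded separately: pip can install directly from a URL. A minimal sketch, assuming Python 3 and the CPU server link above:

```shell
# Install the CPU server wheel straight from its download link.
# Requires network access on the target machine; for offline machines,
# download the .whl first and pass the local filename instead.
python3 -m pip install https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server-0.0.0-py3-none-any.whl
```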

## GPU server
### Python 3
```
# Cuda 10.1 + Cudnn 7 with TensorRT 6, compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.0.0.post101-py3-none-any.whl
# Cuda 10.2 + Cudnn 7 with TensorRT 6, compiled with gcc 5.4
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.0.0.post102-py3-none-any.whl
# Cuda 10.2 + Cudnn 8 with TensorRT 7, compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.0.0.post1028-py3-none-any.whl
# Cuda 11.2 + Cudnn 8 with TensorRT 8 (beta), compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.0.0.post112-py3-none-any.whl
```

## Client

### Python 3.6
```
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.0.0-cp36-none-any.whl
```
### Python 3.7
```
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.0.0-cp37-none-any.whl
```
### Python 3.8
```
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.0.0-cp38-none-any.whl
```

## App
### Python 3
```
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_app-0.0.0-py3-none-any.whl
```

## Binary Package
Most users do not need this section. However, if you deploy Paddle Serving on a machine without network access, the binary executable tar file cannot be downloaded automatically, so the download links for each environment are listed here.

### Bin links
```
# CPU AVX MKL
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-cpu-avx-mkl-0.0.0.tar.gz
# CPU AVX OPENBLAS
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-cpu-avx-openblas-0.0.0.tar.gz
# CPU NOAVX OPENBLAS
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-cpu-noavx-openblas-0.0.0.tar.gz
# Cuda 10.1
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-101-0.0.0.tar.gz
# Cuda 10.2 + Cudnn 7
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-102-0.0.0.tar.gz
# Cuda 10.2 + Cudnn 8
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-1028-0.0.0.tar.gz
# Cuda 11.2
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-112-0.0.0.tar.gz
```

### How to set up SERVING_BIN offline?

- Download the serving server whl package and the binary package, and make sure they are built for the same environment.
- Download the serving client whl and serving app whl, paying attention to the Python version.
- `pip install` the wheels and `tar xf` the binary package, then `export SERVING_BIN=$PWD/serving-gpu-cuda11-0.0.0/serving` (taking Cuda 11 as the example).
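The steps above can be sketched as a shell session. This is a minimal sketch assuming the Cuda 10.2 + Cudnn 7 environment, the filenames from the links above, and that the tarball unpacks to a directory of the same name; substitute the files for your own environment and Python version.

```shell
# 1. Install the wheels downloaded earlier (all from the same environment,
#    client/app wheels matching your Python version), e.g.:
python3 -m pip install paddle_serving_server_gpu-0.0.0.post102-py3-none-any.whl
python3 -m pip install paddle_serving_client-0.0.0-cp37-none-any.whl
python3 -m pip install paddle_serving_app-0.0.0-py3-none-any.whl

# 2. Unpack the binary package and point SERVING_BIN at the serving
#    executable inside it (directory name assumed to match the tarball).
tar xf serving-gpu-102-0.0.0.tar.gz
export SERVING_BIN=$PWD/serving-gpu-102-0.0.0/serving
```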

## Baidu Kunlun Users
Kunlun users on arm-xpu or x86-xpu can download the wheel packages as follows. Users should use the xpu-beta docker image; see [DOCKER IMAGES](./Docker_Images_CN.md).
**We only support Python 3.6 for Kunlun Users.**

### Wheel Package Links

For ARM Kunlun users:
```
https://paddle-serving.bj.bcebos.com/whl/xpu/0.7.0/paddle_serving_server_xpu-0.7.0.post2-cp36-cp36m-linux_aarch64.whl
https://paddle-serving.bj.bcebos.com/whl/xpu/0.7.0/paddle_serving_client-0.7.0-cp36-cp36m-linux_aarch64.whl
https://paddle-serving.bj.bcebos.com/whl/xpu/0.7.0/paddle_serving_app-0.7.0-cp36-cp36m-linux_aarch64.whl
```
 
For x86 Kunlun users:
``` 
https://paddle-serving.bj.bcebos.com/whl/xpu/0.7.0/paddle_serving_server_xpu-0.7.0.post2-cp36-cp36m-linux_x86_64.whl
https://paddle-serving.bj.bcebos.com/whl/xpu/0.7.0/paddle_serving_client-0.7.0-cp36-cp36m-linux_x86_64.whl
https://paddle-serving.bj.bcebos.com/whl/xpu/0.7.0/paddle_serving_app-0.7.0-cp36-cp36m-linux_x86_64.whl
```