# Latest Wheel Packages

## CPU server
### Python 3
```
# Compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server-0.0.0-py3-none-any.whl
```

## GPU server
### Python 3
```
# CUDA 10.1 with TensorRT 6, compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.0.0.post101-py3-none-any.whl
# CUDA 10.2 with TensorRT 7, compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.0.0.post102-py3-none-any.whl
# CUDA 11.0 with TensorRT 7 (beta), compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.0.0.post11-py3-none-any.whl
```
**Tip:** If you want to use the CPU server and the GPU server on the same machine, check the gcc version first: only the CUDA 10.1/10.2/11 packages can run alongside the CPU server, because they are compiled with the same gcc version (8.2).

## Client
### Python 3.6
```
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.0.0-cp36-none-any.whl
```
### Python 3.7
```
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.0.0-cp37-none-any.whl
```
### Python 3.8
```
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.0.0-cp38-none-any.whl
```

## App
### Python 3
```
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_app-0.0.0-py3-none-any.whl
```

## Baidu Kunlun Users
Kunlun users on ARM-XPU or x86-XPU can download the wheel packages below. Use the xpu-beta docker image, see [DOCKER IMAGES](./DOCKER_IMAGES.md).
**Only Python 3.6 is supported for Kunlun users.**

### Wheel Package Links

For ARM Kunlun users:
```
https://paddle-serving.bj.bcebos.com/whl/xpu/0.6.0/paddle_serving_server_xpu-0.6.0.post2-cp36-cp36m-linux_aarch64.whl
https://paddle-serving.bj.bcebos.com/whl/xpu/0.6.0/paddle_serving_client-0.6.0-cp36-cp36m-linux_aarch64.whl
https://paddle-serving.bj.bcebos.com/whl/xpu/0.6.0/paddle_serving_app-0.6.0-cp36-cp36m-linux_aarch64.whl
```

For x86 Kunlun users:
```
https://paddle-serving.bj.bcebos.com/whl/xpu/0.6.0/paddle_serving_server_xpu-0.6.0.post2-cp36-cp36m-linux_x86_64.whl
https://paddle-serving.bj.bcebos.com/whl/xpu/0.6.0/paddle_serving_client-0.6.0-cp36-cp36m-linux_x86_64.whl
https://paddle-serving.bj.bcebos.com/whl/xpu/0.6.0/paddle_serving_app-0.6.0-cp36-cp36m-linux_x86_64.whl
```
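These wheels install directly with pip. A minimal sketch for an x86 Kunlun machine (run inside the xpu-beta docker image; a Python 3.6 `pip3.6` on `PATH` is assumed here):

```shell
# Install the server, client and app wheels for x86 XPU (Python 3.6 only).
pip3.6 install https://paddle-serving.bj.bcebos.com/whl/xpu/0.6.0/paddle_serving_server_xpu-0.6.0.post2-cp36-cp36m-linux_x86_64.whl
pip3.6 install https://paddle-serving.bj.bcebos.com/whl/xpu/0.6.0/paddle_serving_client-0.6.0-cp36-cp36m-linux_x86_64.whl
pip3.6 install https://paddle-serving.bj.bcebos.com/whl/xpu/0.6.0/paddle_serving_app-0.6.0-cp36-cp36m-linux_x86_64.whl
```

ARM users substitute the `linux_aarch64` links above; the package versions are the same.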


### Binary Package
Most users do not need this section. However, if you deploy Paddle Serving on a machine without network access, the binary executable tar file cannot be downloaded at startup, so the download links for each environment are listed here.

#### Bin links
```
# CPU AVX MKL
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-cpu-avx-mkl-0.0.0.tar.gz
# CPU AVX OPENBLAS
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-cpu-avx-openblas-0.0.0.tar.gz
# CPU NOAVX OPENBLAS
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-cpu-noavx-openblas-0.0.0.tar.gz
# CUDA 10.1
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-101-0.0.0.tar.gz
# CUDA 10.2
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-102-0.0.0.tar.gz
# CUDA 11
https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-cuda11-0.0.0.tar.gz
```

#### How to set up SERVING_BIN offline?

- Download the serving server whl package and the bin package, and make sure they target the same environment.
- Download the serving client whl and the serving app whl, paying attention to the Python version.
- `pip install` the server whl and `tar xf` the binary package, then `export SERVING_BIN=$PWD/serving-gpu-cuda11-0.0.0/serving` (taking CUDA 11 as the example).
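The steps above can be sketched as a shell session (CUDA 11 is used as the example; swap in the package links matching your environment):

```shell
# On a machine with network access: fetch the matching whl and bin packages.
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.0.0.post11-py3-none-any.whl
wget https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-gpu-cuda11-0.0.0.tar.gz

# Copy both files to the offline machine, then install and unpack there.
pip3 install paddle_serving_server_gpu-0.0.0.post11-py3-none-any.whl
tar xf serving-gpu-cuda11-0.0.0.tar.gz

# Point Serving at the unpacked binary before starting the server.
export SERVING_BIN=$PWD/serving-gpu-cuda11-0.0.0/serving
```

`SERVING_BIN` only needs to be exported in the shell that launches the server, so adding the line to that machine's startup script is the usual choice.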