# Latest Wheel Packages

## CPU server
### Python 3
```
# Compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server-0.0.0-py3-none-any.whl
```

### Python 2
```
# Compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server-0.0.0-py2-none-any.whl
```

## GPU server
### Python 3
```
# CUDA 9.0, compiled with gcc 4.8
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post9-py3-none-any.whl
# CUDA 10.0, compiled with gcc 4.8
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post10-py3-none-any.whl
# CUDA 10.1 with TensorRT 6, compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post101-py3-none-any.whl
# CUDA 10.2 with TensorRT 7, compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post102-py3-none-any.whl
# CUDA 11.0 with TensorRT 7 (beta), compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post11-py3-none-any.whl
```
### Python 2
```
# CUDA 9.0, compiled with gcc 4.8
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post9-py2-none-any.whl
# CUDA 10.0, compiled with gcc 4.8
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post10-py2-none-any.whl
# CUDA 10.1 with TensorRT 6, compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post101-py2-none-any.whl
# CUDA 10.2 with TensorRT 7, compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post102-py2-none-any.whl
# CUDA 11.0 with TensorRT 7 (beta), compiled with gcc 8.2
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post11-py2-none-any.whl
```
**Tip:** If you want to use the CPU server and the GPU server at the same time, check the gcc version first: only the CUDA 10.1/10.2/11 GPU packages are built with gcc 8.2, the same compiler used for the CPU server, so only those can run alongside it.
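Since the `post` tag in a GPU wheel's filename encodes its CUDA version, a small shell sketch like the following can tell you whether a given wheel is a gcc 8.2 build and can therefore coexist with the CPU server. The mapping mirrors the lists above; the `wheel` variable is just an example filename:

```shell
# Derive the build toolchain from a GPU wheel filename (example input).
wheel="paddle_serving_server_gpu-0.0.0.post102-py3-none-any.whl"
tag="${wheel#*post}"    # strip everything up to and including "post"
tag="${tag%%-*}"        # keep only the post tag itself, e.g. "102"
case "$tag" in
  9|10)       echo "gcc 4.8 build: cannot run alongside the CPU server" ;;
  101|102|11) echo "gcc 8.2 build: same toolchain as the CPU server" ;;
esac
```

For `post102`, as shown, this reports a gcc 8.2 build, matching the tip above.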

## Client
### Python 3.6
```
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_client-0.0.0-cp36-none-any.whl
```
### Python 3.7
```
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_client-0.0.0-cp37-none-any.whl
```
### Python 3.8
```
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_client-0.0.0-cp38-none-any.whl
```
### Python 3.5
```
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_client-0.0.0-cp35-none-any.whl
```
### Python 2.7
```
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_client-0.0.0-cp27-none-any.whl
```

## App
### Python 3
```
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_app-0.0.0-py3-none-any.whl
```

### Python 2
```
https://paddle-serving.bj.bcebos.com/whl/paddle_serving_app-0.0.0-py2-none-any.whl
```

## ARM user
ARM users who run [PaddleLite](https://github.com/PaddlePaddle/PaddleLite) can download the wheel packages below. ARM users should also use the xpu-beta docker image; see [DOCKER IMAGES](./DOCKER_IMAGES.md).
**We only support Python 3.6 for Arm Users.**

### Wheel Package Links
```
# Server 
https://paddle-serving.bj.bcebos.com/whl/xpu/paddle_serving_server_gpu-0.0.0.postarm_xpu-py3-none-any.whl
# Client
https://paddle-serving.bj.bcebos.com/whl/xpu/paddle_serving_client-0.0.0-cp36-none-any.whl 
# App
https://paddle-serving.bj.bcebos.com/whl/xpu/paddle_serving_app-0.0.0-py3-none-any.whl 
```


### Binary Package
Most users do not need this section. However, if you deploy Paddle Serving on a machine without network access, the binary executable tarball cannot be downloaded at startup, so all download links for the various environments are listed here.

#### Bin links
```
# CPU AVX MKL
https://paddle-serving.bj.bcebos.com/bin/serving-cpu-avx-mkl-0.0.0.tar.gz
# CPU AVX OPENBLAS
https://paddle-serving.bj.bcebos.com/bin/serving-cpu-avx-openblas-0.0.0.tar.gz
# CPU NOAVX OPENBLAS
https://paddle-serving.bj.bcebos.com/bin/serving-cpu-noavx-openblas-0.0.0.tar.gz
# Cuda 9
https://paddle-serving.bj.bcebos.com/bin/serving-gpu-cuda9-0.0.0.tar.gz
# Cuda 10
https://paddle-serving.bj.bcebos.com/bin/serving-gpu-cuda10-0.0.0.tar.gz
# Cuda 10.1
https://paddle-serving.bj.bcebos.com/bin/serving-gpu-101-0.0.0.tar.gz
# Cuda 10.2
https://paddle-serving.bj.bcebos.com/bin/serving-gpu-102-0.0.0.tar.gz
# Cuda 11
https://paddle-serving.bj.bcebos.com/bin/serving-gpu-cuda11-0.0.0.tar.gz
```

#### How to set up SERVING_BIN offline?

- Download the serving server wheel package and the binary package, and make sure they target the same environment (CPU/GPU, CUDA version).
- Download the serving client wheel and serving app wheel, paying attention to the Python version.
- `pip install` the wheels, `tar xf` the binary package, and then `export SERVING_BIN=$PWD/serving-gpu-cuda10-0.0.0/serving` (taking CUDA 10.0 as the example).
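The steps above can be sketched as a script. The CUDA 10.0 filenames are examples taken from the lists in this document; the download and install commands are left as comments since they need network access and the copied files:

```shell
# 1) On a machine WITH network access, download a matching pair, e.g. for CUDA 10.0:
#      wget https://paddle-serving.bj.bcebos.com/whl/paddle_serving_server_gpu-0.0.0.post10-py3-none-any.whl
#      wget https://paddle-serving.bj.bcebos.com/bin/serving-gpu-cuda10-0.0.0.tar.gz
#    plus the client/app wheels for your Python version.
# 2) Copy everything to the offline machine, then:
#      pip install paddle_serving_server_gpu-0.0.0.post10-py3-none-any.whl
#      tar xf serving-gpu-cuda10-0.0.0.tar.gz
# 3) Point Paddle Serving at the extracted binary:
export SERVING_BIN=$PWD/serving-gpu-cuda10-0.0.0/serving
echo "SERVING_BIN=$SERVING_BIN"
```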